[spark] branch master updated (5effa8e -> 4a47b3e)

2020-10-08 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 5effa8e  [SPARK-33091][SQL] Avoid using map instead of foreach to avoid potential side effect at callers of OrcUtils.readCatalystSchema
 add 4a47b3e  [DOC][MINOR] pySpark usage - removed repeated keyword causing confusion

No new revisions were added by this update.

Summary of changes:
 docs/submitting-applications.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch branch-3.0 updated (4f71231 -> d51b8d6)

2020-10-06 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 4f71231  [SPARK-33073][PYTHON] Improve error handling on Pandas to Arrow conversion failures
 add d51b8d6  [SPARK-27428][CORE][TEST] Increase receive buffer size used in StatsdSinkSuite

No new revisions were added by this update.

Summary of changes:
 .../spark/metrics/sink/StatsdSinkSuite.scala   | 29 --
 1 file changed, 22 insertions(+), 7 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (0812d6c -> b5e4b8c)

2020-10-06 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 0812d6c  [SPARK-33073][PYTHON] Improve error handling on Pandas to Arrow conversion failures
 add b5e4b8c  [SPARK-27428][CORE][TEST] Increase receive buffer size used in StatsdSinkSuite

No new revisions were added by this update.

Summary of changes:
 .../spark/metrics/sink/StatsdSinkSuite.scala   | 29 --
 1 file changed, 22 insertions(+), 7 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch branch-3.0 updated: [SPARK-33043][ML] Handle spark.driver.maxResultSize=0 in RowMatrix heuristic computation

2020-10-03 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
     new c9b6271  [SPARK-33043][ML] Handle spark.driver.maxResultSize=0 in RowMatrix heuristic computation
c9b6271 is described below

commit c9b62711fdec24160c4bdeff8fc09eedb0b75ee0
Author: Sean Owen 
AuthorDate: Sat Oct 3 13:12:55 2020 -0500

    [SPARK-33043][ML] Handle spark.driver.maxResultSize=0 in RowMatrix heuristic computation

### What changes were proposed in this pull request?

RowMatrix contains a computation based on spark.driver.maxResultSize. 
However, when this value is set to 0, the computation fails (log of 0). The fix 
is simply to correctly handle this setting, which means unlimited result size, 
by using a tree depth of 1 in the RowMatrix method.

### Why are the changes needed?

Simple bug fix to make several Spark ML functions which use RowMatrix run 
correctly in this case.

### Does this PR introduce _any_ user-facing change?

Not other than the bug fix of course.

### How was this patch tested?

Existing RowMatrix tests plus a new test.

Closes #29925 from srowen/SPARK-33043.

Authored-by: Sean Owen 
Signed-off-by: Sean Owen 
(cherry picked from commit f86171aea43479f54ac2bbbca8f128baa3fc4a8c)
Signed-off-by: Sean Owen 
---
 .../apache/spark/mllib/linalg/distributed/RowMatrix.scala |  6 +-
 .../spark/mllib/linalg/distributed/RowMatrixSuite.scala   | 15 +++
 2 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala b/mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala
index 20e26ce..07b9d91 100644
--- a/mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala
+++ b/mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala
@@ -786,11 +786,15 @@ class RowMatrix @Since("1.0.0") (
    * Based on the formulae: (numPartitions)^(1/depth) * objectSize <= DriverMaxResultSize
    * @param aggregatedObjectSizeInBytes the size, in megabytes, of the object being tree aggregated
    */
-  private[spark] def getTreeAggregateIdealDepth(aggregatedObjectSizeInBytes: Long) = {
+  private[spark] def getTreeAggregateIdealDepth(aggregatedObjectSizeInBytes: Long): Int = {
     require(aggregatedObjectSizeInBytes > 0,
       "Cannot compute aggregate depth heuristic based on a zero-size object to aggregate")
 
     val maxDriverResultSizeInBytes = rows.conf.get[Long](MAX_RESULT_SIZE)
+    if (maxDriverResultSizeInBytes <= 0) {
+      // Unlimited result size, so 1 is OK
+      return 1
+    }
 
     require(maxDriverResultSizeInBytes > aggregatedObjectSizeInBytes,
       s"Cannot aggregate object of size $aggregatedObjectSizeInBytes Bytes, "
diff --git a/mllib/src/test/scala/org/apache/spark/mllib/linalg/distributed/RowMatrixSuite.scala b/mllib/src/test/scala/org/apache/spark/mllib/linalg/distributed/RowMatrixSuite.scala
index 0a4b119..adc4eee 100644
--- a/mllib/src/test/scala/org/apache/spark/mllib/linalg/distributed/RowMatrixSuite.scala
+++ b/mllib/src/test/scala/org/apache/spark/mllib/linalg/distributed/RowMatrixSuite.scala
@@ -25,6 +25,7 @@ import breeze.linalg.{norm => brzNorm, svd => brzSvd, DenseMatrix => BDM, DenseV
 import breeze.numerics.abs
 
 import org.apache.spark.SparkFunSuite
+import org.apache.spark.internal.config.MAX_RESULT_SIZE
 import org.apache.spark.mllib.linalg.{Matrices, Vector, Vectors}
 import org.apache.spark.mllib.random.RandomRDDs
 import org.apache.spark.mllib.util.{LocalClusterSparkContext, MLlibTestSparkContext}
@@ -121,6 +122,20 @@ class RowMatrixSuite extends SparkFunSuite with MLlibTestSparkContext {
     assert(objectBiggerThanResultSize.getMessage.contains("it's bigger than maxResultSize"))
   }
 
+  test("SPARK-33043: getTreeAggregateIdealDepth with unlimited driver size") {
+    val originalMaxResultSize = sc.conf.get[Long](MAX_RESULT_SIZE)
+    sc.conf.set(MAX_RESULT_SIZE, 0L)
+    try {
+      val nbPartitions = 100
+      val vectors = sc.emptyRDD[Vector]
+        .repartition(nbPartitions)
+      val rowMat = new RowMatrix(vectors)
+      assert(rowMat.getTreeAggregateIdealDepth(700 * 1024 * 1024) === 1)
+    } finally {
+      sc.conf.set(MAX_RESULT_SIZE, originalMaxResultSize)
+    }
+  }
+
   test("similar columns") {
     val colMags = Vectors.dense(math.sqrt(126), math.sqrt(66), math.sqrt(94))
     val expected = BDM(


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
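
[Editor's note] The depth heuristic that the SPARK-33043 patch guards can be sketched as follows. This is an illustrative Python re-implementation, not Spark's actual Scala code; the function name and parameters are chosen for clarity. It solves numPartitions^(1/depth) * objectSize <= maxResultSize for the smallest depth, and the guard added by the patch is marked inline.

```python
import math

def tree_aggregate_ideal_depth(object_size_bytes: int,
                               max_result_size_bytes: int,
                               num_partitions: int) -> int:
    """Smallest depth with numPartitions^(1/depth) * objectSize <= maxResultSize."""
    if object_size_bytes <= 0:
        raise ValueError("object size must be positive")
    # SPARK-33043 fix: spark.driver.maxResultSize = 0 means "unlimited",
    # so a flat (depth-1) aggregation is always acceptable. Without this
    # guard, the logarithm below would be applied to a non-positive ratio.
    if max_result_size_bytes <= 0:
        return 1
    if max_result_size_bytes <= object_size_bytes:
        raise ValueError("object is bigger than maxResultSize")
    # Rearranging numPartitions^(1/depth) * objectSize <= maxResultSize:
    #   depth >= log(numPartitions) / log(maxResultSize / objectSize)
    depth = math.ceil(math.log(num_partitions) /
                      math.log(max_result_size_bytes / object_size_bytes))
    return max(depth, 1)
```

With maxResultSize set to 0 the guard short-circuits to depth 1, which is what the new RowMatrixSuite test asserts.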



[spark] branch master updated (5af62a2 -> f86171a)

2020-10-03 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 5af62a2  [SPARK-33052][SQL][TEST] Make all the database versions up-to-date for integration tests
 add f86171a  [SPARK-33043][ML] Handle spark.driver.maxResultSize=0 in RowMatrix heuristic computation

No new revisions were added by this update.

Summary of changes:
 .../apache/spark/mllib/linalg/distributed/RowMatrix.scala |  6 +-
 .../spark/mllib/linalg/distributed/RowMatrixSuite.scala   | 15 +++
 2 files changed, 20 insertions(+), 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch branch-3.0 updated: [SPARK-33043][ML] Handle spark.driver.maxResultSize=0 in RowMatrix heuristic computation

2020-10-03 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new c9b6271  [SPARK-33043][ML] Handle spark.driver.maxResultSize=0 in 
RowMatrix heuristic computation
c9b6271 is described below

commit c9b62711fdec24160c4bdeff8fc09eedb0b75ee0
Author: Sean Owen 
AuthorDate: Sat Oct 3 13:12:55 2020 -0500

[SPARK-33043][ML] Handle spark.driver.maxResultSize=0 in RowMatrix 
heuristic computation

### What changes were proposed in this pull request?

RowMatrix contains a computation based on spark.driver.maxResultSize. 
However, when this value is set to 0, the computation fails (log of 0). The fix 
is simply to correctly handle this setting, which means unlimited result size, 
by using a tree depth of 1 in the RowMatrix method.

### Why are the changes needed?

Simple bug fix to make several Spark ML functions which use RowMatrix run 
correctly in this case.

### Does this PR introduce _any_ user-facing change?

Not other than the bug fix of course.

### How was this patch tested?

Existing RowMatrix tests plus a new test.

Closes #29925 from srowen/SPARK-33043.

Authored-by: Sean Owen 
Signed-off-by: Sean Owen 
(cherry picked from commit f86171aea43479f54ac2bbbca8f128baa3fc4a8c)
Signed-off-by: Sean Owen 
---
 .../apache/spark/mllib/linalg/distributed/RowMatrix.scala |  6 +-
 .../spark/mllib/linalg/distributed/RowMatrixSuite.scala   | 15 +++
 2 files changed, 20 insertions(+), 1 deletion(-)

diff --git 
a/mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala
 
b/mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala
index 20e26ce..07b9d91 100644
--- 
a/mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala
+++ 
b/mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala
@@ -786,11 +786,15 @@ class RowMatrix @Since("1.0.0") (
* Based on the formulae: (numPartitions)^(1/depth) * objectSize <= 
DriverMaxResultSize
* @param aggregatedObjectSizeInBytes the size, in megabytes, of the object 
being tree aggregated
*/
-  private[spark] def getTreeAggregateIdealDepth(aggregatedObjectSizeInBytes: 
Long) = {
+  private[spark] def getTreeAggregateIdealDepth(aggregatedObjectSizeInBytes: 
Long): Int = {
 require(aggregatedObjectSizeInBytes > 0,
   "Cannot compute aggregate depth heuristic based on a zero-size object to 
aggregate")
 
 val maxDriverResultSizeInBytes = rows.conf.get[Long](MAX_RESULT_SIZE)
+if (maxDriverResultSizeInBytes <= 0) {
+  // Unlimited result size, so 1 is OK
+  return 1
+}
 
 require(maxDriverResultSizeInBytes > aggregatedObjectSizeInBytes,
   s"Cannot aggregate object of size $aggregatedObjectSizeInBytes Bytes, "
diff --git 
a/mllib/src/test/scala/org/apache/spark/mllib/linalg/distributed/RowMatrixSuite.scala
 
b/mllib/src/test/scala/org/apache/spark/mllib/linalg/distributed/RowMatrixSuite.scala
index 0a4b119..adc4eee 100644
--- 
a/mllib/src/test/scala/org/apache/spark/mllib/linalg/distributed/RowMatrixSuite.scala
+++ 
b/mllib/src/test/scala/org/apache/spark/mllib/linalg/distributed/RowMatrixSuite.scala
@@ -25,6 +25,7 @@ import breeze.linalg.{norm => brzNorm, svd => brzSvd, 
DenseMatrix => BDM, DenseV
 import breeze.numerics.abs
 
 import org.apache.spark.SparkFunSuite
+import org.apache.spark.internal.config.MAX_RESULT_SIZE
 import org.apache.spark.mllib.linalg.{Matrices, Vector, Vectors}
 import org.apache.spark.mllib.random.RandomRDDs
 import org.apache.spark.mllib.util.{LocalClusterSparkContext, 
MLlibTestSparkContext}
@@ -121,6 +122,20 @@ class RowMatrixSuite extends SparkFunSuite with 
MLlibTestSparkContext {
 assert(objectBiggerThanResultSize.getMessage.contains("it's bigger than 
maxResultSize"))
   }
 
+  test("SPARK-33043: getTreeAggregateIdealDepth with unlimited driver size") {
+val originalMaxResultSize = sc.conf.get[Long](MAX_RESULT_SIZE)
+sc.conf.set(MAX_RESULT_SIZE, 0L)
+try {
+  val nbPartitions = 100
+  val vectors = sc.emptyRDD[Vector]
+.repartition(nbPartitions)
+  val rowMat = new RowMatrix(vectors)
+  assert(rowMat.getTreeAggregateIdealDepth(700 * 1024 * 1024) === 1)
+} finally {
+  sc.conf.set(MAX_RESULT_SIZE, originalMaxResultSize)
+}
+  }
+
   test("similar columns") {
 val colMags = Vectors.dense(math.sqrt(126), math.sqrt(66), math.sqrt(94))
 val expected = BDM(


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (5af62a2 -> f86171a)

2020-10-03 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 5af62a2  [SPARK-33052][SQL][TEST] Make all the database versions 
up-to-date for integration tests
 add f86171a  [SPARK-33043][ML] Handle spark.driver.maxResultSize=0 in 
RowMatrix heuristic computation

No new revisions were added by this update.

Summary of changes:
 .../apache/spark/mllib/linalg/distributed/RowMatrix.scala |  6 +-
 .../spark/mllib/linalg/distributed/RowMatrixSuite.scala   | 15 +++
 2 files changed, 20 insertions(+), 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch branch-3.0 updated: [SPARK-33043][ML] Handle spark.driver.maxResultSize=0 in RowMatrix heuristic computation

2020-10-03 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new c9b6271  [SPARK-33043][ML] Handle spark.driver.maxResultSize=0 in 
RowMatrix heuristic computation
c9b6271 is described below

commit c9b62711fdec24160c4bdeff8fc09eedb0b75ee0
Author: Sean Owen 
AuthorDate: Sat Oct 3 13:12:55 2020 -0500

[SPARK-33043][ML] Handle spark.driver.maxResultSize=0 in RowMatrix 
heuristic computation

### What changes were proposed in this pull request?

RowMatrix contains a computation based on spark.driver.maxResultSize. 
However, when this value is set to 0, the computation fails (log of 0). The fix 
is simply to correctly handle this setting, which means unlimited result size, 
by using a tree depth of 1 in the RowMatrix method.

### Why are the changes needed?

Simple bug fix to make several Spark ML functions which use RowMatrix run 
correctly in this case.

### Does this PR introduce _any_ user-facing change?

Not other than the bug fix of course.

### How was this patch tested?

Existing RowMatrix tests plus a new test.

Closes #29925 from srowen/SPARK-33043.

Authored-by: Sean Owen 
Signed-off-by: Sean Owen 
(cherry picked from commit f86171aea43479f54ac2bbbca8f128baa3fc4a8c)
Signed-off-by: Sean Owen 
---
 .../apache/spark/mllib/linalg/distributed/RowMatrix.scala |  6 +-
 .../spark/mllib/linalg/distributed/RowMatrixSuite.scala   | 15 +++
 2 files changed, 20 insertions(+), 1 deletion(-)

diff --git 
a/mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala
 
b/mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala
index 20e26ce..07b9d91 100644
--- 
a/mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala
+++ 
b/mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala
@@ -786,11 +786,15 @@ class RowMatrix @Since("1.0.0") (
* Based on the formulae: (numPartitions)^(1/depth) * objectSize <= 
DriverMaxResultSize
* @param aggregatedObjectSizeInBytes the size, in megabytes, of the object 
being tree aggregated
*/
-  private[spark] def getTreeAggregateIdealDepth(aggregatedObjectSizeInBytes: 
Long) = {
+  private[spark] def getTreeAggregateIdealDepth(aggregatedObjectSizeInBytes: 
Long): Int = {
 require(aggregatedObjectSizeInBytes > 0,
   "Cannot compute aggregate depth heuristic based on a zero-size object to 
aggregate")
 
 val maxDriverResultSizeInBytes = rows.conf.get[Long](MAX_RESULT_SIZE)
+if (maxDriverResultSizeInBytes <= 0) {
+  // Unlimited result size, so 1 is OK
+  return 1
+}
 
 require(maxDriverResultSizeInBytes > aggregatedObjectSizeInBytes,
   s"Cannot aggregate object of size $aggregatedObjectSizeInBytes Bytes, "
diff --git 
a/mllib/src/test/scala/org/apache/spark/mllib/linalg/distributed/RowMatrixSuite.scala
 
b/mllib/src/test/scala/org/apache/spark/mllib/linalg/distributed/RowMatrixSuite.scala
index 0a4b119..adc4eee 100644
--- 
a/mllib/src/test/scala/org/apache/spark/mllib/linalg/distributed/RowMatrixSuite.scala
+++ 
b/mllib/src/test/scala/org/apache/spark/mllib/linalg/distributed/RowMatrixSuite.scala
@@ -25,6 +25,7 @@ import breeze.linalg.{norm => brzNorm, svd => brzSvd, 
DenseMatrix => BDM, DenseV
 import breeze.numerics.abs
 
 import org.apache.spark.SparkFunSuite
+import org.apache.spark.internal.config.MAX_RESULT_SIZE
 import org.apache.spark.mllib.linalg.{Matrices, Vector, Vectors}
 import org.apache.spark.mllib.random.RandomRDDs
 import org.apache.spark.mllib.util.{LocalClusterSparkContext, 
MLlibTestSparkContext}
@@ -121,6 +122,20 @@ class RowMatrixSuite extends SparkFunSuite with 
MLlibTestSparkContext {
 assert(objectBiggerThanResultSize.getMessage.contains("it's bigger than 
maxResultSize"))
   }
 
+  test("SPARK-33043: getTreeAggregateIdealDepth with unlimited driver size") {
+val originalMaxResultSize = sc.conf.get[Long](MAX_RESULT_SIZE)
+sc.conf.set(MAX_RESULT_SIZE, 0L)
+try {
+  val nbPartitions = 100
+  val vectors = sc.emptyRDD[Vector]
+.repartition(nbPartitions)
+  val rowMat = new RowMatrix(vectors)
+  assert(rowMat.getTreeAggregateIdealDepth(700 * 1024 * 1024) === 1)
+} finally {
+  sc.conf.set(MAX_RESULT_SIZE, originalMaxResultSize)
+}
+  }
+
   test("similar columns") {
 val colMags = Vectors.dense(math.sqrt(126), math.sqrt(66), math.sqrt(94))
 val expected = BDM(


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (5af62a2 -> f86171a)

2020-10-03 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 5af62a2  [SPARK-33052][SQL][TEST] Make all the database versions up-to-date for integration tests
 add f86171a  [SPARK-33043][ML] Handle spark.driver.maxResultSize=0 in RowMatrix heuristic computation

No new revisions were added by this update.

Summary of changes:
 .../apache/spark/mllib/linalg/distributed/RowMatrix.scala |  6 +-
 .../spark/mllib/linalg/distributed/RowMatrixSuite.scala   | 15 +++
 2 files changed, 20 insertions(+), 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch branch-3.0 updated: [SPARK-33043][ML] Handle spark.driver.maxResultSize=0 in RowMatrix heuristic computation

2020-10-03 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new c9b6271  [SPARK-33043][ML] Handle spark.driver.maxResultSize=0 in RowMatrix heuristic computation
c9b6271 is described below

commit c9b62711fdec24160c4bdeff8fc09eedb0b75ee0
Author: Sean Owen 
AuthorDate: Sat Oct 3 13:12:55 2020 -0500

[SPARK-33043][ML] Handle spark.driver.maxResultSize=0 in RowMatrix heuristic computation

### What changes were proposed in this pull request?

RowMatrix contains a computation based on spark.driver.maxResultSize. However, when this value is set to 0, the computation fails (log of 0). The fix is simply to handle this setting correctly — it means unlimited result size — by using a tree depth of 1 in the RowMatrix method.

### Why are the changes needed?

Simple bug fix to make several Spark ML functions which use RowMatrix run 
correctly in this case.

### Does this PR introduce _any_ user-facing change?

Not other than the bug fix of course.

### How was this patch tested?

Existing RowMatrix tests plus a new test.

Closes #29925 from srowen/SPARK-33043.

Authored-by: Sean Owen 
Signed-off-by: Sean Owen 
(cherry picked from commit f86171aea43479f54ac2bbbca8f128baa3fc4a8c)
Signed-off-by: Sean Owen 
---
 .../apache/spark/mllib/linalg/distributed/RowMatrix.scala |  6 +-
 .../spark/mllib/linalg/distributed/RowMatrixSuite.scala   | 15 +++
 2 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala b/mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala
index 20e26ce..07b9d91 100644
--- a/mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala
+++ b/mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala
@@ -786,11 +786,15 @@ class RowMatrix @Since("1.0.0") (
    * Based on the formulae: (numPartitions)^(1/depth) * objectSize <= DriverMaxResultSize
    * @param aggregatedObjectSizeInBytes the size, in megabytes, of the object being tree aggregated
    */
-  private[spark] def getTreeAggregateIdealDepth(aggregatedObjectSizeInBytes: Long) = {
+  private[spark] def getTreeAggregateIdealDepth(aggregatedObjectSizeInBytes: Long): Int = {
     require(aggregatedObjectSizeInBytes > 0,
       "Cannot compute aggregate depth heuristic based on a zero-size object to aggregate")
 
 val maxDriverResultSizeInBytes = rows.conf.get[Long](MAX_RESULT_SIZE)
+    if (maxDriverResultSizeInBytes <= 0) {
+      // Unlimited result size, so 1 is OK
+      return 1
+    }
 
 require(maxDriverResultSizeInBytes > aggregatedObjectSizeInBytes,
   s"Cannot aggregate object of size $aggregatedObjectSizeInBytes Bytes, "
diff --git a/mllib/src/test/scala/org/apache/spark/mllib/linalg/distributed/RowMatrixSuite.scala b/mllib/src/test/scala/org/apache/spark/mllib/linalg/distributed/RowMatrixSuite.scala
index 0a4b119..adc4eee 100644
--- a/mllib/src/test/scala/org/apache/spark/mllib/linalg/distributed/RowMatrixSuite.scala
+++ b/mllib/src/test/scala/org/apache/spark/mllib/linalg/distributed/RowMatrixSuite.scala
@@ -25,6 +25,7 @@ import breeze.linalg.{norm => brzNorm, svd => brzSvd, DenseMatrix => BDM, DenseV
 import breeze.numerics.abs
 
 import org.apache.spark.SparkFunSuite
+import org.apache.spark.internal.config.MAX_RESULT_SIZE
 import org.apache.spark.mllib.linalg.{Matrices, Vector, Vectors}
 import org.apache.spark.mllib.random.RandomRDDs
 import org.apache.spark.mllib.util.{LocalClusterSparkContext, MLlibTestSparkContext}
@@ -121,6 +122,20 @@ class RowMatrixSuite extends SparkFunSuite with MLlibTestSparkContext {
     assert(objectBiggerThanResultSize.getMessage.contains("it's bigger than maxResultSize"))
   }
 
+  test("SPARK-33043: getTreeAggregateIdealDepth with unlimited driver size") {
+    val originalMaxResultSize = sc.conf.get[Long](MAX_RESULT_SIZE)
+    sc.conf.set(MAX_RESULT_SIZE, 0L)
+    try {
+      val nbPartitions = 100
+      val vectors = sc.emptyRDD[Vector]
+        .repartition(nbPartitions)
+      val rowMat = new RowMatrix(vectors)
+      assert(rowMat.getTreeAggregateIdealDepth(700 * 1024 * 1024) === 1)
+    } finally {
+      sc.conf.set(MAX_RESULT_SIZE, originalMaxResultSize)
+    }
+  }
+
   test("similar columns") {
 val colMags = Vectors.dense(math.sqrt(126), math.sqrt(66), math.sqrt(94))
 val expected = BDM(


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
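For context, the heuristic this commit patches — find the smallest depth d with numPartitions^(1/d) * objectSize <= maxResultSize, treating a limit of 0 as "unlimited" — can be sketched as follows. This is an illustrative Python translation, not Spark's actual Scala implementation; the function name and the clamping behavior are assumptions.

```python
import math

def ideal_tree_depth(num_partitions: int, object_size: int, max_result_size: int) -> int:
    """Smallest depth d such that num_partitions ** (1 / d) * object_size
    stays within max_result_size (illustrative sketch, not Spark's code)."""
    if object_size <= 0:
        raise ValueError("cannot compute a depth for a zero-size aggregation object")
    if max_result_size <= 0:
        # spark.driver.maxResultSize=0 means "unlimited": the SPARK-33043 fix
        # returns depth 1 here instead of taking log(0) below.
        return 1
    if max_result_size <= object_size:
        raise ValueError("a single object already exceeds the driver limit")
    # Solve num_partitions ** (1/d) * object_size <= max_result_size for d.
    depth = math.log(num_partitions) / (math.log(max_result_size) - math.log(object_size))
    return max(1, math.ceil(depth))
```

With the new test's inputs (100 partitions, a 700 MiB object, unlimited result size) this sketch returns 1, matching the behavior the added test asserts.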



[spark-website] branch asf-site updated: :rocket: Including ApacheSparkBogotá Meetup on community page :rocket:

2020-10-03 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/spark-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 757cc46  :rocket: Including ApacheSparkBogotá Meetup on community page :rocket:
757cc46 is described below

commit 757cc46c85d5b4ad072fe25c32c3dbadc300e3da
Author: miguel diaz 
AuthorDate: Sat Oct 3 10:16:01 2020 -0500

:rocket: Including ApacheSparkBogotá Meetup on community page :rocket:

Hello, I am trying again. :sweat_smile:

I am Co-organizer of Apache Spark Bogotá Meetup from Colombia 
https://www.meetup.com/es/Apache-Spark-Bogota/

I would like to include the community on the following web page:
https://spark.apache.org/community.html

This time I didn't use Jekyll because, as you can see, the new version updates a lot of things; please let me know if it is now good to go.

I changed the .md and the .html community files.

Author: miguel diaz 

Closes #292 from megelon/asbog.
---
 community.md| 5 -
 site/community.html | 3 +++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/community.md b/community.md
index dca08c0..e8f2cf7 100644
--- a/community.md
+++ b/community.md
@@ -139,9 +139,12 @@ Spark Meetups are grass-roots events organized and hosted by individuals in the
     <a href="https://www.meetup.com/SanKir-Big-Data-Group/">Bangalore Spark Meetup</a>
   </li>
   <li>
-    <a href="https://www.meetup.com/Boston-Apache-Spark-User-Group/">Boston Spark Meetup</a>
+    <a href="https://www.meetup.com/es/Apache-Spark-Bogota/">Bogotá Spark Meetup</a>
   </li>
   <li>
+    <a href="https://www.meetup.com/Boston-Apache-Spark-User-Group/">Boston Spark Meetup</a>
+  </li>
+  <li>
     <a href="https://www.meetup.com/Boulder-Denver-Spark-Meetup/">Boulder/Denver Spark Meetup</a>
   </li>
   <li>
diff --git a/site/community.html b/site/community.html
index 337dc8a..f129ac2 100644
--- a/site/community.html
+++ b/site/community.html
@@ -345,6 +345,9 @@ vulnerabilities, and for information on known security issues.
     <a href="https://www.meetup.com/SanKir-Big-Data-Group/">Bangalore Spark Meetup</a>
   </li>
   <li>
+    <a href="https://www.meetup.com/es/Apache-Spark-Bogota/">Bogotá Spark Meetup</a>
+  </li>
+  <li>
     <a href="https://www.meetup.com/Boston-Apache-Spark-User-Group/">Boston Spark Meetup</a>
   </li>
   <li>


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (e62d247 -> 0059997)

2020-10-01 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from e62d247  [SPARK-32585][SQL] Support scala enumeration in ScalaReflection
 add 0059997  [SPARK-33046][DOCS] Update how to build doc for Scala 2.13 with sbt

No new revisions were added by this update.

Summary of changes:
 docs/building-spark.md | 10 --
 1 file changed, 4 insertions(+), 6 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (d3dbe1a -> 0963fcd)

2020-10-01 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from d3dbe1a  [SQL][DOC][MINOR] Corrects input table names in the examples of CREATE FUNCTION doc
 add 0963fcd  [SPARK-33024][SQL] Fix CodeGen fallback issue of UDFSuite in Scala 2.13

No new revisions were added by this update.

Summary of changes:
 .../sql/catalyst/expressions/objects/objects.scala | 53 --
 1 file changed, 39 insertions(+), 14 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch branch-3.0 updated: [MINOR][DOCS] Fixing log message for better clarity

2020-09-29 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new 39bfae2  [MINOR][DOCS] Fixing log message for better clarity
39bfae2 is described below

commit 39bfae25979aecbe8058beb2a4882fde9f141eba
Author: Akshat Bordia 
AuthorDate: Tue Sep 29 08:38:43 2020 -0500

[MINOR][DOCS] Fixing log message for better clarity

Fixing log message for better clarity.

Closes #29870 from akshatb1/master.

Lead-authored-by: Akshat Bordia 
Co-authored-by: Akshat Bordia 
Signed-off-by: Sean Owen 
(cherry picked from commit 7766fd13c9e7cb72b97fdfee224d3958fbe882a0)
Signed-off-by: Sean Owen 
---
 core/src/main/scala/org/apache/spark/SparkConf.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/core/src/main/scala/org/apache/spark/SparkConf.scala b/core/src/main/scala/org/apache/spark/SparkConf.scala
index 40915e3..802100e 100644
--- a/core/src/main/scala/org/apache/spark/SparkConf.scala
+++ b/core/src/main/scala/org/apache/spark/SparkConf.scala
@@ -577,7 +577,7 @@ class SparkConf(loadDefaults: Boolean) extends Cloneable with Logging with Seria
 // If spark.executor.heartbeatInterval bigger than spark.network.timeout,
 // it will almost always cause ExecutorLostFailure. See SPARK-22754.
     require(executorTimeoutThresholdMs > executorHeartbeatIntervalMs, "The value of " +
-      s"${networkTimeout}=${executorTimeoutThresholdMs}ms must be no less than the value of " +
+      s"${networkTimeout}=${executorTimeoutThresholdMs}ms must be greater than the value of " +
       s"${EXECUTOR_HEARTBEAT_INTERVAL.key}=${executorHeartbeatIntervalMs}ms.")
   }
 


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
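The one-word change matters because the check itself is strict: `require(a > b)` rejects equality, so a message saying "no less than" (which permits equality) misdescribes the failing condition. A minimal Python sketch of the same validation, with hypothetical names rather than Spark's API:

```python
def validate_heartbeat(network_timeout_ms: int, heartbeat_interval_ms: int) -> None:
    """Mirror of the SparkConf check: the network timeout must be strictly
    greater than the executor heartbeat interval (sketch, not Spark's code)."""
    if not network_timeout_ms > heartbeat_interval_ms:
        # With equal values this raises, so the message must say "greater than".
        raise ValueError(
            f"spark.network.timeout={network_timeout_ms}ms must be greater than "
            f"spark.executor.heartbeatInterval={heartbeat_interval_ms}ms")
```

For example, `validate_heartbeat(120000, 10000)` passes, while equal values raise with a message that now matches the strict comparison.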





[spark] branch master updated (f167002 -> 7766fd1)

2020-09-29 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from f167002  [SPARK-32901][CORE] Do not allocate memory while spilling UnsafeExternalSorter
 add 7766fd1  [MINOR][DOCS] Fixing log message for better clarity

No new revisions were added by this update.

Summary of changes:
 core/src/main/scala/org/apache/spark/SparkConf.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (bc77e5b -> bb6d5e7)

2020-09-27 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from bc77e5b  [SPARK-32973][ML][DOC] FeatureHasher does not check categoricalCols in inputCols
 add bb6d5e7  [SPARK-32972][ML] Pass all UTs of `mllib` module in Scala 2.13

No new revisions were added by this update.

Summary of changes:
 mllib/src/main/scala/org/apache/spark/ml/feature/IDF.scala   | 6 +++---
 .../src/main/scala/org/apache/spark/ml/feature/MinHashLSH.scala  | 2 +-
 mllib/src/main/scala/org/apache/spark/ml/feature/RFormula.scala  | 2 +-
 .../main/scala/org/apache/spark/ml/feature/StringIndexer.scala   | 5 +++--
 .../main/scala/org/apache/spark/ml/feature/VectorIndexer.scala   | 2 +-
 mllib/src/main/scala/org/apache/spark/ml/feature/Word2Vec.scala  | 3 ++-
 mllib/src/main/scala/org/apache/spark/ml/fpm/FPGrowth.scala  | 2 +-
 mllib/src/main/scala/org/apache/spark/ml/fpm/PrefixSpan.scala| 2 +-
 .../scala/org/apache/spark/mllib/classification/NaiveBayes.scala | 4 ++--
 mllib/src/main/scala/org/apache/spark/mllib/fpm/PrefixSpan.scala | 2 +-
 .../spark/mllib/recommendation/MatrixFactorizationModel.scala| 8 
 .../org/apache/spark/mllib/tree/model/DecisionTreeModel.scala| 2 +-
 .../src/test/scala/org/apache/spark/ml/clustering/LDASuite.scala | 4 ++--
 .../spark/ml/feature/BucketedRandomProjectionLSHSuite.scala  | 2 +-
 mllib/src/test/scala/org/apache/spark/ml/feature/LSHTest.scala   | 3 ++-
 .../test/scala/org/apache/spark/ml/feature/MinHashLSHSuite.scala | 2 +-
 .../src/test/scala/org/apache/spark/ml/feature/NGramSuite.scala  | 2 +-
 .../org/apache/spark/ml/feature/StopWordsRemoverSuite.scala  | 8 +---
 mllib/src/test/scala/org/apache/spark/ml/fpm/FPGrowthSuite.scala | 2 +-
 .../apache/spark/ml/regression/RandomForestRegressorSuite.scala  | 2 +-
 mllib/src/test/scala/org/apache/spark/ml/util/MLTestSuite.scala  | 2 +-
 .../scala/org/apache/spark/mllib/feature/Word2VecSuite.scala | 9 ++---
 22 files changed, 42 insertions(+), 34 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (c65b645 -> bc77e5b)

2020-09-27 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from c65b645  [SPARK-32714][FOLLOW-UP][PYTHON] Address pyspark.install typing errors
 add bc77e5b  [SPARK-32973][ML][DOC] FeatureHasher does not check categoricalCols in inputCols

No new revisions were added by this update.

Summary of changes:
 .../scala/org/apache/spark/ml/feature/FeatureHasher.scala   | 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (6c80547 -> 934a91f)

2020-09-26 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 6c80547  [SPARK-32997][K8S] Support dynamic PVC creation and deletion in K8s driver
 add 934a91f  [SPARK-21481][ML][FOLLOWUP][TRIVIAL] HashingTF use util.collection.OpenHashMap instead of mutable.HashMap

No new revisions were added by this update.

Summary of changes:
 .../org/apache/spark/ml/feature/HashingTF.scala  | 20 ++--
 1 file changed, 6 insertions(+), 14 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated: [SPARK-21481][ML][FOLLOWUP][TRIVIAL] HashingTF use util.collection.OpenHashMap instead of mutable.HashMap

2020-09-26 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 934a91f  [SPARK-21481][ML][FOLLOWUP][TRIVIAL] HashingTF use util.collection.OpenHashMap instead of mutable.HashMap
934a91f is described below

commit 934a91fcb4de1e5c4b93b58e7452afa4bb4a9586
Author: zhengruifeng 
AuthorDate: Sat Sep 26 08:16:39 2020 -0500

[SPARK-21481][ML][FOLLOWUP][TRIVIAL] HashingTF use util.collection.OpenHashMap instead of mutable.HashMap

### What changes were proposed in this pull request?
`HashingTF` now uses `util.collection.OpenHashMap` instead of `mutable.HashMap`.

### Why are the changes needed?
According to `util.collection.OpenHashMap`'s doc:

> This map is about 5X faster than java.util.HashMap, while using much less space overhead.

And according to performance tests such as [Simple microbenchmarks comparing Scala vs Java mutable map performance](https://gist.github.com/pchiusano/1423303), `mutable.HashMap` may be even less efficient than `java.util.HashMap`.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Existing test suites

Closes #29852 from zhengruifeng/hashingtf_opt.

Authored-by: zhengruifeng 
Signed-off-by: Sean Owen 
---
 .../org/apache/spark/ml/feature/HashingTF.scala  | 20 ++--
 1 file changed, 6 insertions(+), 14 deletions(-)

diff --git a/mllib/src/main/scala/org/apache/spark/ml/feature/HashingTF.scala b/mllib/src/main/scala/org/apache/spark/ml/feature/HashingTF.scala
index d2bb013..f4223bc 100644
--- a/mllib/src/main/scala/org/apache/spark/ml/feature/HashingTF.scala
+++ b/mllib/src/main/scala/org/apache/spark/ml/feature/HashingTF.scala
@@ -17,8 +17,6 @@
 
 package org.apache.spark.ml.feature
 
-import scala.collection.mutable
-
 import org.apache.spark.annotation.Since
 import org.apache.spark.ml.Transformer
 import org.apache.spark.ml.attribute.AttributeGroup
@@ -32,6 +30,7 @@ import org.apache.spark.sql.functions.{col, udf}
 import org.apache.spark.sql.types.{ArrayType, StructType}
 import org.apache.spark.util.Utils
 import org.apache.spark.util.VersionUtils.majorMinorVersion
+import org.apache.spark.util.collection.OpenHashMap
 
 /**
  * Maps a sequence of terms to their term frequencies using the hashing trick.
@@ -91,20 +90,13 @@ class HashingTF @Since("3.0.0") private[ml] (
   @Since("2.0.0")
   override def transform(dataset: Dataset[_]): DataFrame = {
 val outputSchema = transformSchema(dataset.schema)
-val localNumFeatures = $(numFeatures)
-val localBinary = $(binary)
+val n = $(numFeatures)
+val updateFunc = if ($(binary)) (v: Double) => 1.0 else (v: Double) => v + 1.0
 
 val hashUDF = udf { terms: Seq[_] =>
-  val termFrequencies = mutable.HashMap.empty[Int, Double].withDefaultValue(0.0)
-  terms.foreach { term =>
-val i = indexOf(term)
-if (localBinary) {
-  termFrequencies(i) = 1.0
-} else {
-  termFrequencies(i) += 1.0
-}
-  }
-  Vectors.sparse(localNumFeatures, termFrequencies.toSeq)
+  val map = new OpenHashMap[Int, Double]()
+  terms.foreach { term => map.changeValue(indexOf(term), 1.0, updateFunc) }
+  Vectors.sparse(n, map.toSeq)
 }
 
 dataset.withColumn($(outputCol), hashUDF(col($(inputCol))),
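For readers following the diff, the term-frequency logic can be sketched outside Spark. Below is a minimal Python analogue of the hashing trick with the same binary/count update-function pattern as the new Scala code; note that `make_tf` and the CRC32-based `index_of` are illustrative stand-ins (Spark's `indexOf` uses a Murmur3-based hash), not the actual implementation:

```python
import zlib

def make_tf(num_features, binary=False):
    # Mirrors the Scala `updateFunc`: binary mode clamps counts to 1.0,
    # otherwise each repeated term increments its bucket's count.
    update = (lambda v: 1.0) if binary else (lambda v: v + 1.0)

    def index_of(term):
        # Illustrative stand-in for HashingTF.indexOf (Spark uses Murmur3);
        # crc32 is used here only because it is deterministic across runs.
        return zlib.crc32(term.encode()) % num_features

    def transform(terms):
        freqs = {}
        for term in terms:
            i = index_of(term)
            # changeValue-style upsert: insert 1.0 if absent, else apply update.
            freqs[i] = update(freqs[i]) if i in freqs else 1.0
        # Sparse (index, value) pairs, like Vectors.sparse(n, map.toSeq).
        return sorted(freqs.items())

    return transform

tf = make_tf(num_features=16)
print(tf(["a", "b", "a"]))  # "a" collapses to one bucket with count 2.0
```

The single update closure chosen once outside the loop is what lets the Scala version call `OpenHashMap.changeValue` without re-checking the `binary` flag per term.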


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch branch-2.4 updated (1366443 -> cd3caab)

2020-09-25 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 1366443  [MINOR][SQL][2.4] Improve examples for `percentile_approx()`
 add cd3caab  [SPARK-32886][SPARK-31882][WEBUI][2.4] fix 'undefined' link in event timeline view

No new revisions were added by this update.

Summary of changes:
 .../org/apache/spark/ui/static/spark-dag-viz.js|  9 ++--
 .../org/apache/spark/ui/static/timeline-view.js| 53 ++
 .../resources/org/apache/spark/ui/static/webui.js  |  7 ++-
 .../main/scala/org/apache/spark/ui/UIUtils.scala   |  1 +
 4 files changed, 43 insertions(+), 27 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch branch-2.4 updated: [SPARK-32886][SPARK-31882][WEBUI][2.4] fix 'undefined' link in event timeline view

2020-09-25 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new cd3caab  [SPARK-32886][SPARK-31882][WEBUI][2.4] fix 'undefined' link in event timeline view
cd3caab is described below

commit cd3caabea60cdbf9f131c12f7225bd97581da659
Author: Zhen Li 
AuthorDate: Fri Sep 25 08:34:19 2020 -0500

[SPARK-32886][SPARK-31882][WEBUI][2.4] fix 'undefined' link in event timeline view

### What changes were proposed in this pull request?

Fix [SPARK-32886](https://issues.apache.org/jira/projects/SPARK/issues/SPARK-32886) in branch-2.4. I cherry-picked [29757](https://github.com/apache/spark/pull/29757) and part of [28690](https://github.com/apache/spark/pull/28690) (the test part is skipped because of conflicts), since PR `29757` depends on PR `28690`. This change fixes the two issues below in branch-2.4:
[SPARK-31882: DAG-viz is not rendered correctly with pagination](https://issues.apache.org/jira/projects/SPARK/issues/SPARK-31882)
[SPARK-32886: '.../jobs/undefined' link from "Event Timeline" in jobs page](https://issues.apache.org/jira/projects/SPARK/issues/SPARK-32886)

### Why are the changes needed?

sarutak found that PR `29757` depends on PR `28690`; merging only `29757` to branch-2.4 would break the UI. I verified that both issues mentioned in [SPARK-32886](https://issues.apache.org/jira/projects/SPARK/issues/SPARK-32886) and [SPARK-31882](https://issues.apache.org/jira/projects/SPARK/issues/SPARK-31882) exist in branch-2.4, so I cherry-picked both fixes to branch-2.4 in the same PR.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Manually tested.

![dag](https://user-images.githubusercontent.com/10524738/93854440-9c770480-fc6a-11ea-9009-0ee68ef090e1.JPG)

![eventTLJPG](https://user-images.githubusercontent.com/10524738/93854447-a13bb880-fc6a-11ea-9b34-73623fa59def.JPG)
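One piece of the patch parses the job id out of each timeline entry's label before building its link. A hedged Python sketch of that parsing, using the same `(Job N)` suffix regex that the JavaScript in the diff uses (`get_job_id` is a hypothetical name, not part of the patch):

```python
import re

# Same pattern as the JS code: match "(Job <digits>)" at the end of the label.
JOB_ID_RE = re.compile(r"\(Job (\d+)\)$")

def get_job_id(entry_text):
    # Return the job id string, or None when the label does not match;
    # guarding the non-matching case avoids emitting an undefined id.
    m = JOB_ID_RE.search(entry_text)
    return m.group(1) if m else None

print(get_job_id("collect at <console>:26 (Job 7)"))  # -> 7
```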

Closes #29833 from zhli1142015/cherry-pick-fix-for-31882-32886.

Lead-authored-by: Zhen Li 
Co-authored-by: Kousuke Saruta 
Signed-off-by: Sean Owen 
---
 .../org/apache/spark/ui/static/spark-dag-viz.js|  9 ++--
 .../org/apache/spark/ui/static/timeline-view.js| 53 ++
 .../resources/org/apache/spark/ui/static/webui.js  |  7 ++-
 .../main/scala/org/apache/spark/ui/UIUtils.scala   |  1 +
 4 files changed, 43 insertions(+), 27 deletions(-)
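The dag-viz part of the change stops scraping the stage link out of the (possibly paginated) stage table and builds the URL directly. A minimal Python sketch of that construction, under the assumption that `ui_root`/`app_base_path` mirror the page globals `uiRoot`/`appBasePath` referenced in the diff:

```python
def stage_link(ui_root, app_base_path, stage_id, attempt_id=0):
    # Build the stage-page URL directly, so it works regardless of which
    # page of the stage table is currently rendered (and on the history server).
    return f"{ui_root}{app_base_path}/stages/stage/?id={stage_id}&attempt={attempt_id}"

print(stage_link("", "/history/app-123", 4))
# -> /history/app-123/stages/stage/?id=4&attempt=0
```

Constructing the link from the ids instead of from a DOM lookup is what removes the dependency on the stage row being present on the current table page.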

diff --git a/core/src/main/resources/org/apache/spark/ui/static/spark-dag-viz.js b/core/src/main/resources/org/apache/spark/ui/static/spark-dag-viz.js
index 75b959f..990b2f8 100644
--- a/core/src/main/resources/org/apache/spark/ui/static/spark-dag-viz.js
+++ b/core/src/main/resources/org/apache/spark/ui/static/spark-dag-viz.js
@@ -210,7 +210,7 @@ function renderDagVizForJob(svgContainer) {
 var dot = metadata.select(".dot-file").text();
 var stageId = metadata.attr("stage-id");
 var containerId = VizConstants.graphPrefix + stageId;
-var isSkipped = metadata.attr("skipped") == "true";
+var isSkipped = metadata.attr("skipped") === "true";
 var container;
 if (isSkipped) {
   container = svgContainer
@@ -219,11 +219,8 @@ function renderDagVizForJob(svgContainer) {
 .attr("skipped", "true");
 } else {
   // Link each graph to the corresponding stage page (TODO: handle stage attempts)
-  // Use the link from the stage table so it also works for the history server
-  var attemptId = 0
-  var stageLink = d3.select("#stage-" + stageId + "-" + attemptId)
-    .select("a.name-link")
-    .attr("href");
+  var attemptId = 0;
+  var stageLink = uiRoot + appBasePath + "/stages/stage/?id=" + stageId + "&attempt=" + attemptId;
   container = svgContainer
 .append("a")
 .attr("xlink:href", stageLink)
diff --git a/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js b/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
index 5be8cff..220b76a 100644
--- a/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
+++ b/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
@@ -42,26 +42,31 @@ function drawApplicationTimeline(groupArray, eventObjArray, startTime, offset) {
   setupZoomable("#application-timeline-zoom-lock", applicationTimeline);
   setupExecutorEventAction();
 
+  function getIdForJobEntry(baseElem) {
+var jobIdText = $($(baseElem).find(".application-timeline-content")[0]).text();
+var jobId = jobIdText.match("\\(Job (\\d+)\\)$")[1];
+return jobId;
+  }
+
+  function getSelectorForJobEntry(jobId

[spark] branch branch-2.4 updated: [SPARK-32886][SPARK-31882][WEBUI][2.4] fix 'undefined' link in event timeline view

2020-09-25 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new cd3caab  [SPARK-32886][SPARK-31882][WEBUI][2.4] fix 'undefined' link 
in event timeline view
cd3caab is described below

commit cd3caabea60cdbf9f131c12f7225bd97581da659
Author: Zhen Li 
AuthorDate: Fri Sep 25 08:34:19 2020 -0500

[SPARK-32886][SPARK-31882][WEBUI][2.4] fix 'undefined' link in event 
timeline view

### What changes were proposed in this pull request?

Fix 
[SPARK-32886](https://issues.apache.org/jira/projects/SPARK/issues/SPARK-32886) 
in branch-2.4. i cherry-pick 
[29757](https://github.com/apache/spark/pull/29757) and partial 
[28690](https://github.com/apache/spark/pull/28690)(test part is ignored as 
conflict), which PR `29757` has dependency on PR `28690`. This change fixes two 
below issues in branch-2.4.
[SPARK-31882: DAG-viz is not rendered correctly with 
pagination.](https://issues.apache.org/jira/projects/SPARK/issues/SPARK-31882)
[SPARK-32886: '.../jobs/undefined' link from "Event Timeline" in jobs 
page](https://issues.apache.org/jira/projects/SPARK/issues/SPARK-32886)
### Why are the changes needed?

sarutak found `29757` has dependency on `28690`. If we only merge `29757` 
to 2.4 branch, it would cause UI break. And I verified both issues mentioned in 
[SPARK-32886](https://issues.apache.org/jira/projects/SPARK/issues/SPARK-32886) 
and 
[SPARK-31882](https://issues.apache.org/jira/projects/SPARK/issues/SPARK-31882) 
 exist in branch-2.4. So i cherry pick them to branch 2.4 in same PR.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Manually tested.

![dag](https://user-images.githubusercontent.com/10524738/93854440-9c770480-fc6a-11ea-9009-0ee68ef090e1.JPG)

![eventTLJPG](https://user-images.githubusercontent.com/10524738/93854447-a13bb880-fc6a-11ea-9b34-73623fa59def.JPG)

Closes #29833 from zhli1142015/cherry-pick-fix-for-31882-32886.

Lead-authored-by: Zhen Li 
Co-authored-by: Kousuke Saruta 
Signed-off-by: Sean Owen 
---
 .../org/apache/spark/ui/static/spark-dag-viz.js|  9 ++--
 .../org/apache/spark/ui/static/timeline-view.js| 53 ++
 .../resources/org/apache/spark/ui/static/webui.js  |  7 ++-
 .../main/scala/org/apache/spark/ui/UIUtils.scala   |  1 +
 4 files changed, 43 insertions(+), 27 deletions(-)

diff --git 
a/core/src/main/resources/org/apache/spark/ui/static/spark-dag-viz.js 
b/core/src/main/resources/org/apache/spark/ui/static/spark-dag-viz.js
index 75b959f..990b2f8 100644
--- a/core/src/main/resources/org/apache/spark/ui/static/spark-dag-viz.js
+++ b/core/src/main/resources/org/apache/spark/ui/static/spark-dag-viz.js
@@ -210,7 +210,7 @@ function renderDagVizForJob(svgContainer) {
 var dot = metadata.select(".dot-file").text();
 var stageId = metadata.attr("stage-id");
 var containerId = VizConstants.graphPrefix + stageId;
-var isSkipped = metadata.attr("skipped") == "true";
+var isSkipped = metadata.attr("skipped") === "true";
 var container;
 if (isSkipped) {
   container = svgContainer
@@ -219,11 +219,8 @@ function renderDagVizForJob(svgContainer) {
 .attr("skipped", "true");
 } else {
   // Link each graph to the corresponding stage page (TODO: handle stage 
attempts)
-  // Use the link from the stage table so it also works for the history 
server
-  var attemptId = 0
-  var stageLink = d3.select("#stage-" + stageId + "-" + attemptId)
-.select("a.name-link")
-.attr("href");
+  var attemptId = 0;
+  var stageLink = uiRoot + appBasePath + "/stages/stage/?id=" + stageId + 
"=" + attemptId;
   container = svgContainer
 .append("a")
 .attr("xlink:href", stageLink)
diff --git 
a/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js 
b/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
index 5be8cff..220b76a 100644
--- a/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
+++ b/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
@@ -42,26 +42,31 @@ function drawApplicationTimeline(groupArray, eventObjArray, 
startTime, offset) {
   setupZoomable("#application-timeline-zoom-lock", applicationTimeline);
   setupExecutorEventAction();
 
+  function getIdForJobEntry(baseElem) {
+var jobIdText = 
$($(baseElem).find(".application-timeline-content")[0]).text();
+var jobId = jobIdText.match("\\(Job (\\d+)\\)$")[1];
+return jobId;
+  }
+
+  function getSelectorForJobEntry(jobId

[spark] branch branch-2.4 updated: [SPARK-32886][SPARK-31882][WEBUI][2.4] fix 'undefined' link in event timeline view

2020-09-25 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new cd3caab  [SPARK-32886][SPARK-31882][WEBUI][2.4] fix 'undefined' link 
in event timeline view
cd3caab is described below

commit cd3caabea60cdbf9f131c12f7225bd97581da659
Author: Zhen Li 
AuthorDate: Fri Sep 25 08:34:19 2020 -0500

[SPARK-32886][SPARK-31882][WEBUI][2.4] fix 'undefined' link in event 
timeline view

### What changes were proposed in this pull request?

Fix 
[SPARK-32886](https://issues.apache.org/jira/projects/SPARK/issues/SPARK-32886) 
in branch-2.4. i cherry-pick 
[29757](https://github.com/apache/spark/pull/29757) and partial 
[28690](https://github.com/apache/spark/pull/28690)(test part is ignored as 
conflict), which PR `29757` has dependency on PR `28690`. This change fixes two 
below issues in branch-2.4.
[SPARK-31882: DAG-viz is not rendered correctly with 
pagination.](https://issues.apache.org/jira/projects/SPARK/issues/SPARK-31882)
[SPARK-32886: '.../jobs/undefined' link from "Event Timeline" in jobs 
page](https://issues.apache.org/jira/projects/SPARK/issues/SPARK-32886)
### Why are the changes needed?

sarutak found that `29757` depends on `28690`. If we merged only `29757` 
into branch-2.4, it would break the UI. I verified that both issues mentioned in 
[SPARK-32886](https://issues.apache.org/jira/projects/SPARK/issues/SPARK-32886) 
and 
[SPARK-31882](https://issues.apache.org/jira/projects/SPARK/issues/SPARK-31882) 
exist in branch-2.4, so I cherry-picked them into branch-2.4 in the same PR.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Manually tested.

![dag](https://user-images.githubusercontent.com/10524738/93854440-9c770480-fc6a-11ea-9009-0ee68ef090e1.JPG)

![eventTLJPG](https://user-images.githubusercontent.com/10524738/93854447-a13bb880-fc6a-11ea-9b34-73623fa59def.JPG)

Closes #29833 from zhli1142015/cherry-pick-fix-for-31882-32886.

Lead-authored-by: Zhen Li 
Co-authored-by: Kousuke Saruta 
Signed-off-by: Sean Owen 
---
 .../org/apache/spark/ui/static/spark-dag-viz.js|  9 ++--
 .../org/apache/spark/ui/static/timeline-view.js| 53 ++
 .../resources/org/apache/spark/ui/static/webui.js  |  7 ++-
 .../main/scala/org/apache/spark/ui/UIUtils.scala   |  1 +
 4 files changed, 43 insertions(+), 27 deletions(-)

diff --git 
a/core/src/main/resources/org/apache/spark/ui/static/spark-dag-viz.js 
b/core/src/main/resources/org/apache/spark/ui/static/spark-dag-viz.js
index 75b959f..990b2f8 100644
--- a/core/src/main/resources/org/apache/spark/ui/static/spark-dag-viz.js
+++ b/core/src/main/resources/org/apache/spark/ui/static/spark-dag-viz.js
@@ -210,7 +210,7 @@ function renderDagVizForJob(svgContainer) {
 var dot = metadata.select(".dot-file").text();
 var stageId = metadata.attr("stage-id");
 var containerId = VizConstants.graphPrefix + stageId;
-var isSkipped = metadata.attr("skipped") == "true";
+var isSkipped = metadata.attr("skipped") === "true";
 var container;
 if (isSkipped) {
   container = svgContainer
@@ -219,11 +219,8 @@ function renderDagVizForJob(svgContainer) {
 .attr("skipped", "true");
 } else {
   // Link each graph to the corresponding stage page (TODO: handle stage 
attempts)
-  // Use the link from the stage table so it also works for the history 
server
-  var attemptId = 0
-  var stageLink = d3.select("#stage-" + stageId + "-" + attemptId)
-.select("a.name-link")
-.attr("href");
+  var attemptId = 0;
+  var stageLink = uiRoot + appBasePath + "/stages/stage/?id=" + stageId + 
"&attempt=" + attemptId;
   container = svgContainer
 .append("a")
 .attr("xlink:href", stageLink)
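
The hunk above stops reading the stage link out of the stage table's
`a.name-link` anchor, which is `undefined` whenever the stage row is not on the
current table page, and builds the URL directly instead. A minimal sketch of the
resulting construction, with illustrative values (`uiRoot` and `appBasePath`
are assumed empty here, i.e. a standalone live UI with no reverse-proxy
prefix):

```javascript
// Build the stage page link directly from ids rather than reading it
// out of the DOM, so it works regardless of table pagination.
var uiRoot = "";       // assumption: no reverse-proxy prefix
var appBasePath = "";  // assumption: live UI, not the history server
var stageId = 3;       // illustrative stage id
var attemptId = 0;     // first attempt
var stageLink = uiRoot + appBasePath + "/stages/stage/?id=" + stageId +
    "&attempt=" + attemptId;
// stageLink -> "/stages/stage/?id=3&attempt=0"
```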
diff --git 
a/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js 
b/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
index 5be8cff..220b76a 100644
--- a/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
+++ b/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
@@ -42,26 +42,31 @@ function drawApplicationTimeline(groupArray, eventObjArray, 
startTime, offset) {
   setupZoomable("#application-timeline-zoom-lock", applicationTimeline);
   setupExecutorEventAction();
 
+  function getIdForJobEntry(baseElem) {
+var jobIdText = 
$($(baseElem).find(".application-timeline-content")[0]).text();
+var jobId = jobIdText.match("\\(Job (\\d+)\\)$")[1];
+return jobId;
+  }
+
+  function getSelectorForJobEntry(jobId) {
+return "#job-" + jobId;
+  }

[spark] branch branch-2.4 updated (1366443 -> cd3caab)

2020-09-25 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 1366443  [MINOR][SQL][2.4] Improve examples for `percentile_approx()`
 add cd3caab  [SPARK-32886][SPARK-31882][WEBUI][2.4] fix 'undefined' link 
in event timeline view

No new revisions were added by this update.

Summary of changes:
 .../org/apache/spark/ui/static/spark-dag-viz.js|  9 ++--
 .../org/apache/spark/ui/static/timeline-view.js| 53 ++
 .../resources/org/apache/spark/ui/static/webui.js  |  7 ++-
 .../main/scala/org/apache/spark/ui/UIUtils.scala   |  1 +
 4 files changed, 43 insertions(+), 27 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (383bb4a -> faeb71b)

2020-09-23 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 383bb4a  [SPARK-32892][CORE][SQL] Fix hash functions on big-endian 
platforms
 add faeb71b  [SPARK-32950][SQL] Remove unnecessary big-endian code paths

No new revisions were added by this update.

Summary of changes:
 .../execution/vectorized/OffHeapColumnVector.java  | 24 --
 .../execution/vectorized/OnHeapColumnVector.java   | 22 
 2 files changed, 8 insertions(+), 38 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (432afac -> 383bb4a)

2020-09-23 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 432afac  [SPARK-32907][ML] adaptively blockify instances - revert 
blockify gmm
 add 383bb4a  [SPARK-32892][CORE][SQL] Fix hash functions on big-endian 
platforms

No new revisions were added by this update.

Summary of changes:
 .../apache/spark/util/sketch/Murmur3_x86_32.java   | 10 ++-
 .../apache/spark/unsafe/hash/Murmur3_x86_32.java   | 10 ++-
 .../spark/sql/catalyst/expressions/XXH64.java  | 43 ++
 .../spark/sql/catalyst/expressions/XXH64Suite.java | 91 +-
 4 files changed, 98 insertions(+), 56 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch branch-2.4 updated: [SPARK-32886][WEBUI] fix 'undefined' link in event timeline view

2020-09-21 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new 2516128  [SPARK-32886][WEBUI] fix 'undefined' link in event timeline 
view
2516128 is described below

commit 2516128e466c489effc20e72d06c26a2d2f7faed
Author: Zhen Li 
AuthorDate: Mon Sep 21 09:05:40 2020 -0500

[SPARK-32886][WEBUI] fix 'undefined' link in event timeline view

### What changes were proposed in this pull request?

Fix the ".../jobs/undefined" link from "Event Timeline" in the jobs page. 
The job page link in the "Event Timeline" view is constructed by fetching the 
job page link defined in the job list below. When the job count exceeds the 
page size of the job table, only the links of jobs present in the table can be 
fetched from the page. The other jobs' links are 'undefined', so their entries 
in "Event Timeline" are broken and redirect to a weird URL like 
".../jobs/undefined". This PR fixes this wrong-link issue. With this PR, jo [...]
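
The mechanism described above can be sketched as a small standalone function:
the job id is parsed out of the timeline entry's own label (which ends in
`(Job <id>)`), then the job page URL is built from it directly. The jQuery
plumbing is omitted, and `uiRoot`/`appBasePath` are assumed empty (standalone
UI, no proxy prefix); the label value is illustrative:

```javascript
// Sketch of the fix: derive the job id from the timeline entry's own
// label text instead of scraping the (possibly paginated) job table.
function getIdForJobEntry(labelText) {
  // Timeline labels end with e.g. "count at <console>:26 (Job 42)"
  var m = labelText.match(/\(Job (\d+)\)$/);
  return m ? m[1] : null;
}

// Assumptions: no reverse-proxy prefix, live UI (not the history server).
var uiRoot = "";
var appBasePath = "";
var jobId = getIdForJobEntry("count at <console>:26 (Job 42)");
var jobPagePath = uiRoot + appBasePath + "/jobs/job/?id=" + jobId;
// jobPagePath -> "/jobs/job/?id=42"
```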

### Why are the changes needed?

Wrong link (".../jobs/undefined") in "Event Timeline" of the jobs page. For 
example, the first job in the page below is not in the table below, as the job 
count (116) exceeds the page size (100). When clicking its item in "Event 
Timeline", the page is redirected to ".../jobs/undefined", which is wrong. 
Links in "Event Timeline" should always be correct.

![undefinedlink](https://user-images.githubusercontent.com/10524738/93184779-83fa6d80-f6f1-11ea-8a80-1a304ca9cbb2.JPG)

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Manually tested.

Closes #29757 from zhli1142015/fix-link-event-timeline-view.

Authored-by: Zhen Li 
Signed-off-by: Sean Owen 
(cherry picked from commit d01594e8d186e63a6c3ce361e756565e830d5237)
Signed-off-by: Sean Owen 
---
 .../org/apache/spark/ui/static/timeline-view.js| 53 ++
 1 file changed, 33 insertions(+), 20 deletions(-)

diff --git 
a/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js 
b/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
index 5be8cff..220b76a 100644
--- a/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
+++ b/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
@@ -42,26 +42,31 @@ function drawApplicationTimeline(groupArray, eventObjArray, 
startTime, offset) {
   setupZoomable("#application-timeline-zoom-lock", applicationTimeline);
   setupExecutorEventAction();
 
+  function getIdForJobEntry(baseElem) {
+var jobIdText = 
$($(baseElem).find(".application-timeline-content")[0]).text();
+var jobId = jobIdText.match("\\(Job (\\d+)\\)$")[1];
+return jobId;
+  }
+
+  function getSelectorForJobEntry(jobId) {
+return "#job-" + jobId;
+  }
+
   function setupJobEventAction() {
 $(".vis-item.vis-range.job.application-timeline-object").each(function() {
-  var getSelectorForJobEntry = function(baseElem) {
-var jobIdText = 
$($(baseElem).find(".application-timeline-content")[0]).text();
-var jobId = jobIdText.match("\\(Job (\\d+)\\)$")[1];
-   return "#job-" + jobId;
-  };
-
   $(this).click(function() {
-var jobPagePath = 
$(getSelectorForJobEntry(this)).find("a.name-link").attr("href");
-  window.location.href = jobPagePath
+var jobId = getIdForJobEntry(this);
+var jobPagePath = uiRoot + appBasePath + "/jobs/job/?id=" + jobId;
+window.location.href = jobPagePath;
   });
 
   $(this).hover(
 function() {
-  $(getSelectorForJobEntry(this)).addClass("corresponding-item-hover");
+  
$(getSelectorForJobEntry(getIdForJobEntry(this))).addClass("corresponding-item-hover");
   
$($(this).find("div.application-timeline-content")[0]).tooltip("show");
 },
 function() {
-  
$(getSelectorForJobEntry(this)).removeClass("corresponding-item-hover");
+  
$(getSelectorForJobEntry(getIdForJobEntry(this))).removeClass("corresponding-item-hover");
   
$($(this).find("div.application-timeline-content")[0]).tooltip("hide");
 }
   );
@@ -125,26 +130,34 @@ function drawJobTimeline(groupArray, eventObjArray, 
startTime, offset) {
   setupZoomable("#job-timeline-zoom-lock", jobTimeline);
   setupExecutorEventAction();
 
+  function getStageIdAndAttemptForStageEntry(baseElem) {
+var stageIdText = $($(baseElem).find(".job-timeline-content")[0]).text();
+var stageIdAndAttempt = stageId

[spark] branch branch-3.0 updated: [SPARK-32886][WEBUI] fix 'undefined' link in event timeline view

2020-09-21 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new 0a4b668  [SPARK-32886][WEBUI] fix 'undefined' link in event timeline 
view
0a4b668 is described below

commit 0a4b668b62cf198358897a145f7d844614cd1931
Author: Zhen Li 
AuthorDate: Mon Sep 21 09:05:40 2020 -0500

[SPARK-32886][WEBUI] fix 'undefined' link in event timeline view

### What changes were proposed in this pull request?

Fix the ".../jobs/undefined" link from "Event Timeline" in the jobs page. 
The job page link in the "Event Timeline" view is constructed by fetching the 
job page link defined in the job list below. When the job count exceeds the 
page size of the job table, only the links of jobs present in the table can be 
fetched from the page. The other jobs' links are 'undefined', so their entries 
in "Event Timeline" are broken and redirect to a weird URL like 
".../jobs/undefined". This PR fixes this wrong-link issue. With this PR, jo [...]

### Why are the changes needed?

Wrong link (".../jobs/undefined") in "Event Timeline" of the jobs page. For 
example, the first job in the page below is not in the table below, as the job 
count (116) exceeds the page size (100). When clicking its item in "Event 
Timeline", the page is redirected to ".../jobs/undefined", which is wrong. 
Links in "Event Timeline" should always be correct.

![undefinedlink](https://user-images.githubusercontent.com/10524738/93184779-83fa6d80-f6f1-11ea-8a80-1a304ca9cbb2.JPG)

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Manually tested.

Closes #29757 from zhli1142015/fix-link-event-timeline-view.

Authored-by: Zhen Li 
Signed-off-by: Sean Owen 
(cherry picked from commit d01594e8d186e63a6c3ce361e756565e830d5237)
Signed-off-by: Sean Owen 
---
 .../org/apache/spark/ui/static/timeline-view.js| 53 ++
 1 file changed, 33 insertions(+), 20 deletions(-)

diff --git 
a/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js 
b/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
index 5be8cff..220b76a 100644
--- a/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
+++ b/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
@@ -42,26 +42,31 @@ function drawApplicationTimeline(groupArray, eventObjArray, 
startTime, offset) {
   setupZoomable("#application-timeline-zoom-lock", applicationTimeline);
   setupExecutorEventAction();
 
+  function getIdForJobEntry(baseElem) {
+var jobIdText = 
$($(baseElem).find(".application-timeline-content")[0]).text();
+var jobId = jobIdText.match("\\(Job (\\d+)\\)$")[1];
+return jobId;
+  }
+
+  function getSelectorForJobEntry(jobId) {
+return "#job-" + jobId;
+  }
+
   function setupJobEventAction() {
 $(".vis-item.vis-range.job.application-timeline-object").each(function() {
-  var getSelectorForJobEntry = function(baseElem) {
-var jobIdText = 
$($(baseElem).find(".application-timeline-content")[0]).text();
-var jobId = jobIdText.match("\\(Job (\\d+)\\)$")[1];
-   return "#job-" + jobId;
-  };
-
   $(this).click(function() {
-var jobPagePath = 
$(getSelectorForJobEntry(this)).find("a.name-link").attr("href");
-  window.location.href = jobPagePath
+var jobId = getIdForJobEntry(this);
+var jobPagePath = uiRoot + appBasePath + "/jobs/job/?id=" + jobId;
+window.location.href = jobPagePath;
   });
 
   $(this).hover(
 function() {
-  $(getSelectorForJobEntry(this)).addClass("corresponding-item-hover");
+  
$(getSelectorForJobEntry(getIdForJobEntry(this))).addClass("corresponding-item-hover");
   
$($(this).find("div.application-timeline-content")[0]).tooltip("show");
 },
 function() {
-  
$(getSelectorForJobEntry(this)).removeClass("corresponding-item-hover");
+  
$(getSelectorForJobEntry(getIdForJobEntry(this))).removeClass("corresponding-item-hover");
   
$($(this).find("div.application-timeline-content")[0]).tooltip("hide");
 }
   );
@@ -125,26 +130,34 @@ function drawJobTimeline(groupArray, eventObjArray, 
startTime, offset) {
   setupZoomable("#job-timeline-zoom-lock", jobTimeline);
   setupExecutorEventAction();
 
+  function getStageIdAndAttemptForStageEntry(baseElem) {
+var stageIdText = $($(baseElem).find(".job-timeline-content")[0]).text();
+var stageIdAndAttempt = stageId

[spark] branch branch-2.4 updated: [SPARK-32886][WEBUI] fix 'undefined' link in event timeline view

2020-09-21 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new 2516128  [SPARK-32886][WEBUI] fix 'undefined' link in event timeline 
view
2516128 is described below

commit 2516128e466c489effc20e72d06c26a2d2f7faed
Author: Zhen Li 
AuthorDate: Mon Sep 21 09:05:40 2020 -0500

[SPARK-32886][WEBUI] fix 'undefined' link in event timeline view

### What changes were proposed in this pull request?

Fix ".../jobs/undefined" link from "Event Timeline" in jobs page. Job page 
link in "Event Timeline" view is constructed by fetching job page link defined 
in job list below. when job count exceeds page size of job table, only links of 
jobs in job table can be fetched from page. Other jobs' link would be 
'undefined', and links of them in "Event Timeline" are broken, they are 
redirected to some wired URL like ".../jobs/undefined". This PR is fixing this 
wrong link issue. With this PR, jo [...]

### Why are the changes needed?

Wrong link (".../jobs/undefined") in "Event Timeline" of jobs page. for 
example, the first job in below page is not in table below, as job count(116) 
exceeds page size(100). When clicking it's item in "Event Timeline", page is 
redirected to ".../jobs/undefined", which is wrong. Links in "Event Timeline" 
should always be correct.

![undefinedlink](https://user-images.githubusercontent.com/10524738/93184779-83fa6d80-f6f1-11ea-8a80-1a304ca9cbb2.JPG)

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Manually tested.

Closes #29757 from zhli1142015/fix-link-event-timeline-view.

Authored-by: Zhen Li 
Signed-off-by: Sean Owen 
(cherry picked from commit d01594e8d186e63a6c3ce361e756565e830d5237)
Signed-off-by: Sean Owen 
---
 .../org/apache/spark/ui/static/timeline-view.js| 53 ++
 1 file changed, 33 insertions(+), 20 deletions(-)

diff --git 
a/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js 
b/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
index 5be8cff..220b76a 100644
--- a/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
+++ b/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
@@ -42,26 +42,31 @@ function drawApplicationTimeline(groupArray, eventObjArray, 
startTime, offset) {
   setupZoomable("#application-timeline-zoom-lock", applicationTimeline);
   setupExecutorEventAction();
 
+  function getIdForJobEntry(baseElem) {
+var jobIdText = 
$($(baseElem).find(".application-timeline-content")[0]).text();
+var jobId = jobIdText.match("\\(Job (\\d+)\\)$")[1];
+return jobId;
+  }
+
+  function getSelectorForJobEntry(jobId) {
+return "#job-" + jobId;
+  }
+
   function setupJobEventAction() {
 $(".vis-item.vis-range.job.application-timeline-object").each(function() {
-  var getSelectorForJobEntry = function(baseElem) {
-var jobIdText = 
$($(baseElem).find(".application-timeline-content")[0]).text();
-var jobId = jobIdText.match("\\(Job (\\d+)\\)$")[1];
-   return "#job-" + jobId;
-  };
-
   $(this).click(function() {
-var jobPagePath = 
$(getSelectorForJobEntry(this)).find("a.name-link").attr("href");
-  window.location.href = jobPagePath
+var jobId = getIdForJobEntry(this);
+var jobPagePath = uiRoot + appBasePath + "/jobs/job/?id=" + jobId;
+window.location.href = jobPagePath;
   });
 
   $(this).hover(
 function() {
-  $(getSelectorForJobEntry(this)).addClass("corresponding-item-hover");
+  
$(getSelectorForJobEntry(getIdForJobEntry(this))).addClass("corresponding-item-hover");
   
$($(this).find("div.application-timeline-content")[0]).tooltip("show");
 },
 function() {
-  
$(getSelectorForJobEntry(this)).removeClass("corresponding-item-hover");
+  
$(getSelectorForJobEntry(getIdForJobEntry(this))).removeClass("corresponding-item-hover");
   
$($(this).find("div.application-timeline-content")[0]).tooltip("hide");
 }
   );
@@ -125,26 +130,34 @@ function drawJobTimeline(groupArray, eventObjArray, 
startTime, offset) {
   setupZoomable("#job-timeline-zoom-lock", jobTimeline);
   setupExecutorEventAction();
 
+  function getStageIdAndAttemptForStageEntry(baseElem) {
+var stageIdText = $($(baseElem).find(".job-timeline-content")[0]).text();
+var stageIdAndAttempt = stageId

[spark] branch branch-3.0 updated: [SPARK-32886][WEBUI] fix 'undefined' link in event timeline view

2020-09-21 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new 0a4b668  [SPARK-32886][WEBUI] fix 'undefined' link in event timeline 
view
0a4b668 is described below

commit 0a4b668b62cf198358897a145f7d844614cd1931
Author: Zhen Li 
AuthorDate: Mon Sep 21 09:05:40 2020 -0500

[SPARK-32886][WEBUI] fix 'undefined' link in event timeline view

### What changes were proposed in this pull request?

Fix ".../jobs/undefined" link from "Event Timeline" in jobs page. Job page 
link in "Event Timeline" view is constructed by fetching job page link defined 
in job list below. when job count exceeds page size of job table, only links of 
jobs in job table can be fetched from page. Other jobs' link would be 
'undefined', and links of them in "Event Timeline" are broken, they are 
redirected to some wired URL like ".../jobs/undefined". This PR is fixing this 
wrong link issue. With this PR, jo [...]

### Why are the changes needed?

Wrong link (".../jobs/undefined") in "Event Timeline" of jobs page. for 
example, the first job in below page is not in table below, as job count(116) 
exceeds page size(100). When clicking it's item in "Event Timeline", page is 
redirected to ".../jobs/undefined", which is wrong. Links in "Event Timeline" 
should always be correct.

![undefinedlink](https://user-images.githubusercontent.com/10524738/93184779-83fa6d80-f6f1-11ea-8a80-1a304ca9cbb2.JPG)

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Manually tested.

Closes #29757 from zhli1142015/fix-link-event-timeline-view.

Authored-by: Zhen Li 
Signed-off-by: Sean Owen 
(cherry picked from commit d01594e8d186e63a6c3ce361e756565e830d5237)
Signed-off-by: Sean Owen 
---
 .../org/apache/spark/ui/static/timeline-view.js| 53 ++
 1 file changed, 33 insertions(+), 20 deletions(-)

diff --git 
a/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js 
b/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
index 5be8cff..220b76a 100644
--- a/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
+++ b/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
@@ -42,26 +42,31 @@ function drawApplicationTimeline(groupArray, eventObjArray, 
startTime, offset) {
   setupZoomable("#application-timeline-zoom-lock", applicationTimeline);
   setupExecutorEventAction();
 
+  function getIdForJobEntry(baseElem) {
+var jobIdText = 
$($(baseElem).find(".application-timeline-content")[0]).text();
+var jobId = jobIdText.match("\\(Job (\\d+)\\)$")[1];
+return jobId;
+  }
+
+  function getSelectorForJobEntry(jobId) {
+return "#job-" + jobId;
+  }
+
   function setupJobEventAction() {
 $(".vis-item.vis-range.job.application-timeline-object").each(function() {
-  var getSelectorForJobEntry = function(baseElem) {
-var jobIdText = 
$($(baseElem).find(".application-timeline-content")[0]).text();
-var jobId = jobIdText.match("\\(Job (\\d+)\\)$")[1];
-   return "#job-" + jobId;
-  };
-
   $(this).click(function() {
-var jobPagePath = 
$(getSelectorForJobEntry(this)).find("a.name-link").attr("href");
-  window.location.href = jobPagePath
+var jobId = getIdForJobEntry(this);
+var jobPagePath = uiRoot + appBasePath + "/jobs/job/?id=" + jobId;
+window.location.href = jobPagePath;
   });
 
   $(this).hover(
 function() {
-  $(getSelectorForJobEntry(this)).addClass("corresponding-item-hover");
+  
$(getSelectorForJobEntry(getIdForJobEntry(this))).addClass("corresponding-item-hover");
   
$($(this).find("div.application-timeline-content")[0]).tooltip("show");
 },
 function() {
-  
$(getSelectorForJobEntry(this)).removeClass("corresponding-item-hover");
+  
$(getSelectorForJobEntry(getIdForJobEntry(this))).removeClass("corresponding-item-hover");
   
$($(this).find("div.application-timeline-content")[0]).tooltip("hide");
 }
   );
@@ -125,26 +130,34 @@ function drawJobTimeline(groupArray, eventObjArray, 
startTime, offset) {
   setupZoomable("#job-timeline-zoom-lock", jobTimeline);
   setupExecutorEventAction();
 
+  function getStageIdAndAttemptForStageEntry(baseElem) {
+var stageIdText = $($(baseElem).find(".job-timeline-content")[0]).text();
+var stageIdAndAttempt = stageId

[spark] branch branch-3.0 updated: [SPARK-32886][WEBUI] fix 'undefined' link in event timeline view

2020-09-21 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new 0a4b668  [SPARK-32886][WEBUI] fix 'undefined' link in event timeline 
view
0a4b668 is described below

commit 0a4b668b62cf198358897a145f7d844614cd1931
Author: Zhen Li 
AuthorDate: Mon Sep 21 09:05:40 2020 -0500

[SPARK-32886][WEBUI] fix 'undefined' link in event timeline view

### What changes were proposed in this pull request?

Fix ".../jobs/undefined" link from "Event Timeline" in jobs page. Job page 
link in "Event Timeline" view is constructed by fetching job page link defined 
in job list below. when job count exceeds page size of job table, only links of 
jobs in job table can be fetched from page. Other jobs' link would be 
'undefined', and links of them in "Event Timeline" are broken, they are 
redirected to some wired URL like ".../jobs/undefined". This PR is fixing this 
wrong link issue. With this PR, jo [...]

### Why are the changes needed?

Wrong link (".../jobs/undefined") in "Event Timeline" of jobs page. for 
example, the first job in below page is not in table below, as job count(116) 
exceeds page size(100). When clicking it's item in "Event Timeline", page is 
redirected to ".../jobs/undefined", which is wrong. Links in "Event Timeline" 
should always be correct.

![undefinedlink](https://user-images.githubusercontent.com/10524738/93184779-83fa6d80-f6f1-11ea-8a80-1a304ca9cbb2.JPG)

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Manually tested.

Closes #29757 from zhli1142015/fix-link-event-timeline-view.

Authored-by: Zhen Li 
Signed-off-by: Sean Owen 
(cherry picked from commit d01594e8d186e63a6c3ce361e756565e830d5237)
Signed-off-by: Sean Owen 
---
 .../org/apache/spark/ui/static/timeline-view.js| 53 ++
 1 file changed, 33 insertions(+), 20 deletions(-)

diff --git a/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js b/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
index 5be8cff..220b76a 100644
--- a/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
+++ b/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
@@ -42,26 +42,31 @@ function drawApplicationTimeline(groupArray, eventObjArray, startTime, offset) {
   setupZoomable("#application-timeline-zoom-lock", applicationTimeline);
   setupExecutorEventAction();
 
+  function getIdForJobEntry(baseElem) {
+    var jobIdText = $($(baseElem).find(".application-timeline-content")[0]).text();
+    var jobId = jobIdText.match("\\(Job (\\d+)\\)$")[1];
+    return jobId;
+  }
+
+  function getSelectorForJobEntry(jobId) {
+    return "#job-" + jobId;
+  }
+
   function setupJobEventAction() {
     $(".vis-item.vis-range.job.application-timeline-object").each(function() {
-      var getSelectorForJobEntry = function(baseElem) {
-        var jobIdText = $($(baseElem).find(".application-timeline-content")[0]).text();
-        var jobId = jobIdText.match("\\(Job (\\d+)\\)$")[1];
-        return "#job-" + jobId;
-      };
-
       $(this).click(function() {
-        var jobPagePath = $(getSelectorForJobEntry(this)).find("a.name-link").attr("href");
-        window.location.href = jobPagePath
+        var jobId = getIdForJobEntry(this);
+        var jobPagePath = uiRoot + appBasePath + "/jobs/job/?id=" + jobId;
+        window.location.href = jobPagePath;
       });
 
       $(this).hover(
         function() {
-          $(getSelectorForJobEntry(this)).addClass("corresponding-item-hover");
+          $(getSelectorForJobEntry(getIdForJobEntry(this))).addClass("corresponding-item-hover");
           $($(this).find("div.application-timeline-content")[0]).tooltip("show");
         },
         function() {
-          $(getSelectorForJobEntry(this)).removeClass("corresponding-item-hover");
+          $(getSelectorForJobEntry(getIdForJobEntry(this))).removeClass("corresponding-item-hover");
           $($(this).find("div.application-timeline-content")[0]).tooltip("hide");
         }
       );
@@ -125,26 +130,34 @@ function drawJobTimeline(groupArray, eventObjArray, startTime, offset) {
   setupZoomable("#job-timeline-zoom-lock", jobTimeline);
   setupExecutorEventAction();
 
+  function getStageIdAndAttemptForStageEntry(baseElem) {
+    var stageIdText = $($(baseElem).find(".job-timeline-content")[0]).text();
+    var stageIdAndAttempt = stageId

[spark] branch master updated (c336ddf -> d01594e)

2020-09-21 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from c336ddf  [SPARK-32867][SQL] When explain, HiveTableRelation show 
limited message
 add d01594e  [SPARK-32886][WEBUI] fix 'undefined' link in event timeline 
view

No new revisions were added by this update.

Summary of changes:
 .../org/apache/spark/ui/static/timeline-view.js| 53 ++
 1 file changed, 33 insertions(+), 20 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch branch-2.4 updated: [SPARK-32886][WEBUI] fix 'undefined' link in event timeline view

2020-09-21 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new 2516128  [SPARK-32886][WEBUI] fix 'undefined' link in event timeline 
view
2516128 is described below

commit 2516128e466c489effc20e72d06c26a2d2f7faed
Author: Zhen Li 
AuthorDate: Mon Sep 21 09:05:40 2020 -0500

[SPARK-32886][WEBUI] fix 'undefined' link in event timeline view

### What changes were proposed in this pull request?

Fix the ".../jobs/undefined" link from the "Event Timeline" in the jobs page. The job page link in the "Event Timeline" view is constructed by fetching the job page link defined in the job list below. When the job count exceeds the page size of the job table, only the links of jobs shown in the table can be fetched from the page. Other jobs' links are 'undefined', so their entries in the "Event Timeline" are broken and redirect to a weird URL like ".../jobs/undefined". This PR fixes this wrong link issue. With this PR, jo [...]

### Why are the changes needed?

Wrong link (".../jobs/undefined") in the "Event Timeline" of the jobs page. For example, the first job in the page below is not in the table below, as the job count (116) exceeds the page size (100). When clicking its item in the "Event Timeline", the page is redirected to ".../jobs/undefined", which is wrong. Links in the "Event Timeline" should always be correct.

![undefinedlink](https://user-images.githubusercontent.com/10524738/93184779-83fa6d80-f6f1-11ea-8a80-1a304ca9cbb2.JPG)

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Manually tested.

Closes #29757 from zhli1142015/fix-link-event-timeline-view.

Authored-by: Zhen Li 
Signed-off-by: Sean Owen 
(cherry picked from commit d01594e8d186e63a6c3ce361e756565e830d5237)
Signed-off-by: Sean Owen 
---
 .../org/apache/spark/ui/static/timeline-view.js| 53 ++
 1 file changed, 33 insertions(+), 20 deletions(-)

diff --git a/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js b/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
index 5be8cff..220b76a 100644
--- a/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
+++ b/core/src/main/resources/org/apache/spark/ui/static/timeline-view.js
@@ -42,26 +42,31 @@ function drawApplicationTimeline(groupArray, eventObjArray, startTime, offset) {
   setupZoomable("#application-timeline-zoom-lock", applicationTimeline);
   setupExecutorEventAction();
 
+  function getIdForJobEntry(baseElem) {
+    var jobIdText = $($(baseElem).find(".application-timeline-content")[0]).text();
+    var jobId = jobIdText.match("\\(Job (\\d+)\\)$")[1];
+    return jobId;
+  }
+
+  function getSelectorForJobEntry(jobId) {
+    return "#job-" + jobId;
+  }
+
   function setupJobEventAction() {
     $(".vis-item.vis-range.job.application-timeline-object").each(function() {
-      var getSelectorForJobEntry = function(baseElem) {
-        var jobIdText = $($(baseElem).find(".application-timeline-content")[0]).text();
-        var jobId = jobIdText.match("\\(Job (\\d+)\\)$")[1];
-        return "#job-" + jobId;
-      };
-
       $(this).click(function() {
-        var jobPagePath = $(getSelectorForJobEntry(this)).find("a.name-link").attr("href");
-        window.location.href = jobPagePath
+        var jobId = getIdForJobEntry(this);
+        var jobPagePath = uiRoot + appBasePath + "/jobs/job/?id=" + jobId;
+        window.location.href = jobPagePath;
       });
 
       $(this).hover(
         function() {
-          $(getSelectorForJobEntry(this)).addClass("corresponding-item-hover");
+          $(getSelectorForJobEntry(getIdForJobEntry(this))).addClass("corresponding-item-hover");
           $($(this).find("div.application-timeline-content")[0]).tooltip("show");
         },
         function() {
-          $(getSelectorForJobEntry(this)).removeClass("corresponding-item-hover");
+          $(getSelectorForJobEntry(getIdForJobEntry(this))).removeClass("corresponding-item-hover");
           $($(this).find("div.application-timeline-content")[0]).tooltip("hide");
         }
       );
@@ -125,26 +130,34 @@ function drawJobTimeline(groupArray, eventObjArray, startTime, offset) {
   setupZoomable("#job-timeline-zoom-lock", jobTimeline);
   setupExecutorEventAction();
 
+  function getStageIdAndAttemptForStageEntry(baseElem) {
+    var stageIdText = $($(baseElem).find(".job-timeline-content")[0]).text();
+    var stageIdAndAttempt = stageId

[spark] branch master updated (664a171 -> 2128c4f)

2020-09-18 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 664a171  [SPARK-32936][SQL] Pass all `external/avro` module UTs in 
Scala 2.13
 add 2128c4f  [SPARK-32808][SQL] Pass all test of sql/core module in Scala 
2.13

No new revisions were added by this update.

Summary of changes:
 .../catalyst/optimizer/CostBasedJoinReorder.scala  |  19 +-
 .../StarJoinCostBasedReorderSuite.scala|   2 +-
 .../approved-plans-modified/q27.sf100/explain.txt  | 210 ++---
 .../q27.sf100/simplified.txt   |  22 +-
 .../approved-plans-modified/q7.sf100/explain.txt   | 108 +++
 .../q7.sf100/simplified.txt|  10 +-
 .../approved-plans-v1_4/q13.sf100/explain.txt  | 112 +++
 .../approved-plans-v1_4/q13.sf100/simplified.txt   |  12 +-
 .../approved-plans-v1_4/q17.sf100/explain.txt  | 120 
 .../approved-plans-v1_4/q17.sf100/simplified.txt   |  10 +-
 .../approved-plans-v1_4/q19.sf100/explain.txt  | 204 ++---
 .../approved-plans-v1_4/q19.sf100/simplified.txt   |  36 +--
 .../approved-plans-v1_4/q24a.sf100/explain.txt |  94 +++---
 .../approved-plans-v1_4/q24a.sf100/simplified.txt  |  18 +-
 .../approved-plans-v1_4/q24b.sf100/explain.txt |  94 +++---
 .../approved-plans-v1_4/q24b.sf100/simplified.txt  |  18 +-
 .../approved-plans-v1_4/q25.sf100/explain.txt  | 120 
 .../approved-plans-v1_4/q25.sf100/simplified.txt   |  10 +-
 .../approved-plans-v1_4/q29.sf100/explain.txt  | 118 
 .../approved-plans-v1_4/q29.sf100/simplified.txt   |  10 +-
 .../approved-plans-v1_4/q31.sf100/explain.txt  |  40 +--
 .../approved-plans-v1_4/q31.sf100/simplified.txt   |  12 +-
 .../approved-plans-v1_4/q45.sf100/explain.txt  | 102 +++
 .../approved-plans-v1_4/q45.sf100/simplified.txt   |  20 +-
 .../approved-plans-v1_4/q50.sf100/explain.txt  | 104 +++
 .../approved-plans-v1_4/q50.sf100/simplified.txt   |  10 +-
 .../approved-plans-v1_4/q6.sf100/explain.txt   | 224 +++---
 .../approved-plans-v1_4/q6.sf100/simplified.txt|  74 ++---
 .../approved-plans-v1_4/q61.sf100/explain.txt  | 127 +++-
 .../approved-plans-v1_4/q61.sf100/simplified.txt   |  17 +-
 .../approved-plans-v1_4/q62.sf100/explain.txt  | 108 +++
 .../approved-plans-v1_4/q62.sf100/simplified.txt   |  10 +-
 .../approved-plans-v1_4/q66.sf100/explain.txt  | 136 -
 .../approved-plans-v1_4/q66.sf100/simplified.txt   |  16 +-
 .../approved-plans-v1_4/q72.sf100/explain.txt  | 334 ++---
 .../approved-plans-v1_4/q72.sf100/simplified.txt   |  34 +--
 .../approved-plans-v1_4/q80.sf100/explain.txt  |  98 +++---
 .../approved-plans-v1_4/q80.sf100/simplified.txt   |  38 +--
 .../approved-plans-v1_4/q84.sf100/explain.txt  |  86 +++---
 .../approved-plans-v1_4/q84.sf100/simplified.txt   |  10 +-
 .../approved-plans-v1_4/q85.sf100/explain.txt  | 304 +--
 .../approved-plans-v1_4/q85.sf100/simplified.txt   |  44 +--
 .../approved-plans-v1_4/q91.sf100/explain.txt  | 202 ++---
 .../approved-plans-v1_4/q91.sf100/simplified.txt   |  26 +-
 .../approved-plans-v1_4/q99.sf100/explain.txt  | 108 +++
 .../approved-plans-v1_4/q99.sf100/simplified.txt   |  10 +-
 .../approved-plans-v2_7/q6.sf100/explain.txt   | 224 +++---
 .../approved-plans-v2_7/q6.sf100/simplified.txt|  74 ++---
 .../approved-plans-v2_7/q72.sf100/explain.txt  | 334 ++---
 .../approved-plans-v2_7/q72.sf100/simplified.txt   |  34 +--
 .../approved-plans-v2_7/q80a.sf100/explain.txt |  98 +++---
 .../approved-plans-v2_7/q80a.sf100/simplified.txt  |  38 +--
 52 files changed, 2204 insertions(+), 2239 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (664a171 -> 2128c4f)

2020-09-18 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 664a171  [SPARK-32936][SQL] Pass all `external/avro` module UTs in Scala 2.13
 add 2128c4f  [SPARK-32808][SQL] Pass all test of sql/core module in Scala 2.13

No new revisions were added by this update.

Summary of changes:
 .../catalyst/optimizer/CostBasedJoinReorder.scala  |  19 +-
 .../StarJoinCostBasedReorderSuite.scala|   2 +-
 .../approved-plans-modified/q27.sf100/explain.txt  | 210 ++---
 .../q27.sf100/simplified.txt   |  22 +-
 .../approved-plans-modified/q7.sf100/explain.txt   | 108 +++
 .../q7.sf100/simplified.txt|  10 +-
 .../approved-plans-v1_4/q13.sf100/explain.txt  | 112 +++
 .../approved-plans-v1_4/q13.sf100/simplified.txt   |  12 +-
 .../approved-plans-v1_4/q17.sf100/explain.txt  | 120 
 .../approved-plans-v1_4/q17.sf100/simplified.txt   |  10 +-
 .../approved-plans-v1_4/q19.sf100/explain.txt  | 204 ++---
 .../approved-plans-v1_4/q19.sf100/simplified.txt   |  36 +--
 .../approved-plans-v1_4/q24a.sf100/explain.txt |  94 +++---
 .../approved-plans-v1_4/q24a.sf100/simplified.txt  |  18 +-
 .../approved-plans-v1_4/q24b.sf100/explain.txt |  94 +++---
 .../approved-plans-v1_4/q24b.sf100/simplified.txt  |  18 +-
 .../approved-plans-v1_4/q25.sf100/explain.txt  | 120 
 .../approved-plans-v1_4/q25.sf100/simplified.txt   |  10 +-
 .../approved-plans-v1_4/q29.sf100/explain.txt  | 118 
 .../approved-plans-v1_4/q29.sf100/simplified.txt   |  10 +-
 .../approved-plans-v1_4/q31.sf100/explain.txt  |  40 +--
 .../approved-plans-v1_4/q31.sf100/simplified.txt   |  12 +-
 .../approved-plans-v1_4/q45.sf100/explain.txt  | 102 +++
 .../approved-plans-v1_4/q45.sf100/simplified.txt   |  20 +-
 .../approved-plans-v1_4/q50.sf100/explain.txt  | 104 +++
 .../approved-plans-v1_4/q50.sf100/simplified.txt   |  10 +-
 .../approved-plans-v1_4/q6.sf100/explain.txt   | 224 +++---
 .../approved-plans-v1_4/q6.sf100/simplified.txt|  74 ++---
 .../approved-plans-v1_4/q61.sf100/explain.txt  | 127 +++-
 .../approved-plans-v1_4/q61.sf100/simplified.txt   |  17 +-
 .../approved-plans-v1_4/q62.sf100/explain.txt  | 108 +++
 .../approved-plans-v1_4/q62.sf100/simplified.txt   |  10 +-
 .../approved-plans-v1_4/q66.sf100/explain.txt  | 136 -
 .../approved-plans-v1_4/q66.sf100/simplified.txt   |  16 +-
 .../approved-plans-v1_4/q72.sf100/explain.txt  | 334 ++---
 .../approved-plans-v1_4/q72.sf100/simplified.txt   |  34 +--
 .../approved-plans-v1_4/q80.sf100/explain.txt  |  98 +++---
 .../approved-plans-v1_4/q80.sf100/simplified.txt   |  38 +--
 .../approved-plans-v1_4/q84.sf100/explain.txt  |  86 +++---
 .../approved-plans-v1_4/q84.sf100/simplified.txt   |  10 +-
 .../approved-plans-v1_4/q85.sf100/explain.txt  | 304 +--
 .../approved-plans-v1_4/q85.sf100/simplified.txt   |  44 +--
 .../approved-plans-v1_4/q91.sf100/explain.txt  | 202 ++---
 .../approved-plans-v1_4/q91.sf100/simplified.txt   |  26 +-
 .../approved-plans-v1_4/q99.sf100/explain.txt  | 108 +++
 .../approved-plans-v1_4/q99.sf100/simplified.txt   |  10 +-
 .../approved-plans-v2_7/q6.sf100/explain.txt   | 224 +++---
 .../approved-plans-v2_7/q6.sf100/simplified.txt|  74 ++---
 .../approved-plans-v2_7/q72.sf100/explain.txt  | 334 ++---
 .../approved-plans-v2_7/q72.sf100/simplified.txt   |  34 +--
 .../approved-plans-v2_7/q80a.sf100/explain.txt |  98 +++---
 .../approved-plans-v2_7/q80a.sf100/simplified.txt  |  38 +--
 52 files changed, 2204 insertions(+), 2239 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (657e39a -> 7fdb571)

2020-09-16 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 657e39a  [SPARK-32897][PYTHON] Don't show a deprecation warning at SparkSession.builder.getOrCreate
 add 7fdb571  [SPARK-32890][SQL] Pass all `sql/hive` module UTs in Scala 2.13

No new revisions were added by this update.

Summary of changes:
 .../resources/regression-test-SPARK-8489/test-2.13.jar  | Bin 0 -> 19579 bytes
 .../spark/sql/hive/HiveSchemaInferenceSuite.scala   |   2 +-
 .../apache/spark/sql/hive/HiveSparkSubmitSuite.scala|   2 +-
 .../org/apache/spark/sql/hive/StatisticsSuite.scala |   2 +-
 .../apache/spark/sql/hive/execution/HiveDDLSuite.scala  |   2 +-
 5 files changed, 4 insertions(+), 4 deletions(-)
 create mode 100644 sql/hive/src/test/resources/regression-test-SPARK-8489/test-2.13.jar


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org


