Repository: spark
Updated Branches:
refs/heads/master ab6f60c4d -> 92cfbeeb5
[SPARK-21866][ML][PYTHON][FOLLOWUP] Few cleanups and fix image test failure in
Python 3.6.0 / NumPy 1.13.3
## What changes were proposed in this pull request?
The image test appears to fail in Python 3.6.0 / NumPy 1.13.3.
Repository: spark
Updated Branches:
refs/heads/master 8ff474f6e -> ab6f60c4d
[SPARK-22585][CORE] Path in addJar is not url encoded
## What changes were proposed in this pull request?
This updates the behavior of the `addJar` method of the `SparkContext` class. If a path
without any scheme is passed as
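The description is cut off here, but the underlying issue (a local path that is not percent-encoded before being treated as a `file:` URI) can be illustrated with a minimal Python sketch. The helper name and the `file:` prefix below are illustrative, not Spark's actual code:

```python
from urllib.parse import quote, unquote, urlparse

def path_to_file_uri(path):
    # Percent-encode everything except "/" so characters such as
    # spaces survive a round trip through URI parsing.
    return "file:" + quote(path, safe="/")

raw = "/tmp/my jars/app.jar"
uri = path_to_file_uri(raw)
print(uri)                                 # file:/tmp/my%20jars/app.jar
print(unquote(urlparse(uri).path) == raw)  # True
```

Without the encoding step, the space would either be rejected by a strict URI parser or silently corrupt the path, which is the class of bug this commit addresses.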
Repository: spark
Updated Branches:
refs/heads/master 193555f79 -> 8ff474f6e
[SPARK-20650][CORE] Remove JobProgressListener.
The only remaining use of this class was the SparkStatusTracker, which
was modified to use the new status store. The test code to wait for
executors was moved to TestUtils and now uses the SparkStatusTracker API.
Indirectly, ConsoleProgressBar also
Repository: spark
Updated Branches:
refs/heads/master 284836862 -> 193555f79
[SPARK-18935][MESOS] Fix dynamic reservations on mesos
## What changes were proposed in this pull request?
- Solves the issue described in the ticket by preserving reservation and
allocation info in all cases (port
Repository: spark
Updated Branches:
refs/heads/master 57687280d -> 284836862
[SPARK-22608][SQL] add new API to CodeGeneration.splitExpressions()
## What changes were proposed in this pull request?
This PR adds a new API to `CodeGenerator.splitExpressions` since
several `
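The message is truncated, but the purpose of `splitExpressions` is to break long runs of generated expression code into separate methods so that no single generated method exceeds the JVM's 64KB bytecode limit. A toy Python sketch of that chunking idea follows; the threshold and names are illustrative, not Spark's:

```python
def split_expressions(snippets, limit=1024):
    """Group generated code snippets into chunks whose combined size
    stays under `limit`, mimicking a code generator that emits one
    helper method per chunk to stay below the JVM's method-size cap."""
    chunks, current, size = [], [], 0
    for code in snippets:
        # Flush the current chunk before it would overflow the limit.
        if current and size + len(code) > limit:
            chunks.append(current)
            current, size = [], 0
        current.append(code)
        size += len(code)
    if current:
        chunks.append(current)
    return chunks

snippets = ["value = value + %d;" % i for i in range(100)]
chunks = split_expressions(snippets, limit=200)
print(all(sum(len(c) for c in chunk) <= 200 for chunk in chunks))  # True
```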
Repository: spark
Updated Branches:
refs/heads/master 20b239845 -> 57687280d
[SPARK-22615][SQL] Handle more cases in PropagateEmptyRelation
## What changes were proposed in this pull request?
Currently, in the optimizer rule `PropagateEmptyRelation`, the following cases
are not handled:
1.
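The list of cases is truncated, but the rule's general idea, collapsing operators whose input is provably empty into an empty relation, can be sketched with a toy plan rewriter in Python. The plan encoding here is invented for illustration and is not Catalyst's representation:

```python
# Toy model of propagating empty relations bottom-up: an inner join
# with an empty side, or a project/filter over an empty input, can be
# collapsed to an empty relation outright.
def propagate_empty(plan):
    op = plan["op"]
    children = [propagate_empty(c) for c in plan.get("children", [])]
    if op == "InnerJoin" and any(c["op"] == "Empty" for c in children):
        return {"op": "Empty"}
    if op in ("Project", "Filter") and children and children[0]["op"] == "Empty":
        return {"op": "Empty"}
    return {**plan, "children": children} if children else plan

plan = {"op": "Project", "children": [
    {"op": "InnerJoin", "children": [{"op": "Scan"}, {"op": "Empty"}]}]}
print(propagate_empty(plan))  # {'op': 'Empty'}
```

The commit extends the real rule to cover additional operator/emptiness combinations beyond the ones it already recognized.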
Repository: spark
Updated Branches:
refs/heads/master e9b2070ab -> 20b239845
[SPARK-22605][SQL] SQL write job should also set Spark task output metrics
## What changes were proposed in this pull request?
For SQL write jobs, we only set metrics for the SQL listener and display them
in the
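The message is cut off, but the distinction it draws, metrics reported to the SQL listener versus per-task output metrics, can be sketched with a toy write path in Python. All names here are invented for illustration and are not Spark internals:

```python
# Toy sketch: a write path that updates both sinks the commit message
# distinguishes -- per-task output metrics and SQL-listener statistics.
class OutputMetrics:
    def __init__(self):
        self.records_written = 0
        self.bytes_written = 0

def write_rows(rows, task_metrics, sql_listener_stats):
    for row in rows:
        encoded = str(row).encode()
        task_metrics.records_written += 1
        task_metrics.bytes_written += len(encoded)
    # The fix described above is to keep task-level metrics in sync
    # rather than updating only the SQL listener's view.
    sql_listener_stats["numOutputRows"] = task_metrics.records_written

metrics, stats = OutputMetrics(), {}
write_rows([1, 2, 3], metrics, stats)
print(metrics.records_written, stats["numOutputRows"])  # 3 3
```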