[spark] branch master updated (ed12b61 -> 6233958)

2019-11-05 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from ed12b61  [SPARK-29656][ML][PYSPARK] ML algs expose aggregationDepth
 add 6233958  [SPARK-29680][SQL] Remove ALTER TABLE CHANGE COLUMN syntax

No new revisions were added by this update.

Summary of changes:
 .../apache/spark/sql/catalyst/parser/SqlBase.g4    |   5 +-
 .../spark/sql/catalyst/parser/AstBuilder.scala     |   7 +-
 .../sql/catalyst/parser/ErrorParserSuite.scala     |   2 +-
 .../spark/sql/execution/SparkSqlParser.scala       |  27 ---
 .../resources/sql-tests/inputs/change-column.sql   |  45 ++---
 .../sql-tests/results/change-column.sql.out        | 212 +++--
 .../sql/execution/command/DDLParserSuite.scala     |  28 ---
 .../spark/sql/execution/command/DDLSuite.scala     |   2 +-
 8 files changed, 89 insertions(+), 239 deletions(-)
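
For readers of this digest: the statement form being dropped is the Hive-style `CHANGE COLUMN` clause. A hedged sketch (table and column names are invented, and the surviving `ALTER COLUMN` form shown is inferred from the updated change-column.sql tests above, not stated in this email):

```scala
// Hypothetical identifiers (`events`, `ts`) for illustration only.
// The Hive-style form removed by this commit no longer parses on master:
spark.sql("ALTER TABLE events CHANGE COLUMN ts ts TIMESTAMP")  // ParseException expected
// Column changes are presumably expressed via the ALTER COLUMN form instead:
spark.sql("ALTER TABLE events ALTER COLUMN ts COMMENT 'event time'")
```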


[spark] branch master updated (20b9d82 -> ed12b61)

2019-11-05 Thread ruifengz
This is an automated email from the ASF dual-hosted git repository.

ruifengz pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 20b9d82  [SPARK-29714][SQL][TESTS] Port insert.sql
 add ed12b61  [SPARK-29656][ML][PYSPARK] ML algs expose aggregationDepth

No new revisions were added by this update.

Summary of changes:
 .../spark/ml/clustering/GaussianMixture.scala      | 12 ---
 .../apache/spark/ml/feature/VectorIndexer.scala    |  2 +-
 .../optim/IterativelyReweightedLeastSquares.scala  |  5 +++--
 .../spark/ml/optim/WeightedLeastSquares.scala      |  6 --
 .../regression/GeneralizedLinearRegression.scala   | 19 ++--
 python/pyspark/ml/clustering.py                    | 25 --
 python/pyspark/ml/regression.py                    | 22 +--
 7 files changed, 65 insertions(+), 26 deletions(-)
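
As context for this digest: the diffstat suggests the existing expert param `aggregationDepth` is being surfaced on more estimators (GaussianMixture and GeneralizedLinearRegression on the Scala side, mirrored in the PySpark wrappers). A hedged usage sketch, assuming the commit adds the standard `setAggregationDepth` setter to GaussianMixture:

```scala
import org.apache.spark.ml.clustering.GaussianMixture

// Raising the treeAggregate depth can help on very wide feature vectors;
// the value 3 here is illustrative (spark.ml's usual default is 2).
val gmm = new GaussianMixture()
  .setK(3)
  .setAggregationDepth(3)
```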


[spark] branch master updated (075cd55 -> 20b9d82)

2019-11-05 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 075cd55  [SPARK-29763] Fix Stage UI Page not showing all accumulators in Task Table
 add 20b9d82  [SPARK-29714][SQL][TESTS] Port insert.sql

No new revisions were added by this update.

Summary of changes:
 .../sql-tests/inputs/postgreSQL/insert.sql | 653 +
 .../sql-tests/results/postgreSQL/insert.sql.out    |  81 +++
 2 files changed, 734 insertions(+)
 create mode 100644 sql/core/src/test/resources/sql-tests/inputs/postgreSQL/insert.sql
 create mode 100644 sql/core/src/test/resources/sql-tests/results/postgreSQL/insert.sql.out


[spark] branch master updated (4c53ac1 -> 075cd55)

2019-11-05 Thread vanzin
This is an automated email from the ASF dual-hosted git repository.

vanzin pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 4c53ac1  [SPARK-29387][SQL] Support `*` and `/` operators for intervals
 add 075cd55  [SPARK-29763] Fix Stage UI Page not showing all accumulators in Task Table

No new revisions were added by this update.

Summary of changes:
 core/src/main/resources/org/apache/spark/ui/static/stagepage.js | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)


[spark] branch master updated (3cb18d9 -> 4c53ac1)

2019-11-05 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 3cb18d9  [SPARK-29151][CORE] Support fractional resources for task resource scheduling
 add 4c53ac1  [SPARK-29387][SQL] Support `*` and `/` operators for intervals

No new revisions were added by this update.

Summary of changes:
 .../spark/sql/catalyst/analysis/TypeCoercion.scala |  8 +++
 .../catalyst/expressions/intervalExpressions.scala | 70 ++
 .../spark/sql/catalyst/util/IntervalUtils.scala    | 23 +++
 .../sql/catalyst/analysis/TypeCoercionSuite.scala  | 21 +++
 .../expressions/IntervalExpressionsSuite.scala | 39 +++-
 .../sql/catalyst/util/IntervalUtilsSuite.scala | 47 +--
 .../test/resources/sql-tests/inputs/datetime.sql   |  5 ++
 .../resources/sql-tests/results/datetime.sql.out   | 26 +++-
 8 files changed, 219 insertions(+), 20 deletions(-)
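
For context, a hedged sketch of what the new operators allow, per the commit title (assumes an active `SparkSession` named `spark`; the interval values are invented):

```scala
// Multiply and divide interval values by numerics.
spark.sql("SELECT interval 2 hours 30 minutes * 2").show()  // expect 5 hours
spark.sql("SELECT interval 3 days / 2").show()              // expect 1 day 12 hours
```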


[spark] branch master updated: [SPARK-29151][CORE] Support fractional resources for task resource scheduling

2019-11-05 Thread tgraves
This is an automated email from the ASF dual-hosted git repository.

tgraves pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 3cb18d9  [SPARK-29151][CORE] Support fractional resources for task resource scheduling
3cb18d9 is described below

commit 3cb18d90c441bbaa64c693e276793b670213e599
Author: Alessandro Bellina 
AuthorDate: Tue Nov 5 08:57:43 2019 -0600

[SPARK-29151][CORE] Support fractional resources for task resource scheduling

### What changes were proposed in this pull request?
This PR adds the ability for tasks to request fractional resources, in order to be able to execute more than 1 task per resource. For example, if you have 1 GPU in the executor, and the task configuration is 0.5 GPU/task, the executor can schedule two tasks to run on that 1 GPU.
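
A minimal sketch of the resulting slot arithmetic (the helper is ours, not Spark API; it mirrors the `floor(execAmount * numParts / amount)` computation in the diff below, assuming a 0.5 request is represented as amount = 1 over numParts = 2):

```scala
// Task slots that one executor's resource pool yields for a given task request.
def slotsPerExecutor(execAmount: Int, taskAmount: Int, taskNumParts: Int): Int =
  math.floor(execAmount.toDouble * taskNumParts / taskAmount).toInt

slotsPerExecutor(execAmount = 1, taskAmount = 1, taskNumParts = 2)  // 0.5 GPU/task -> 2 slots
slotsPerExecutor(execAmount = 4, taskAmount = 2, taskNumParts = 1)  // 2 GPUs/task  -> 2 slots
```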

### Why are the changes needed?
Currently there is no good way to share a resource such that multiple tasks can run on a single unit. This allows multiple tasks to share an executor resource.

### Does this PR introduce any user-facing change?
Yes: There is a configuration change where `spark.task.resource.[resource type].amount` can now be fractional.
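
For illustration, a hedged configuration sketch (the `gpu` resource name and amounts are invented; the executor-side key follows the matching `spark.executor.resource.[resource type].amount` pattern):

```scala
import org.apache.spark.SparkConf

// One GPU per executor, half a GPU per task: two tasks may share each GPU.
val conf = new SparkConf()
  .set("spark.executor.resource.gpu.amount", "1")
  .set("spark.task.resource.gpu.amount", "0.5")  // fractional amounts now allowed
```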

### How was this patch tested?
Unit tests, plus manual testing in standalone mode and on YARN.

Closes #26078 from abellina/SPARK-29151.

Authored-by: Alessandro Bellina 
Signed-off-by: Thomas Graves 
---
 .../main/scala/org/apache/spark/SparkContext.scala | 21 ++--
 .../apache/spark/deploy/master/WorkerInfo.scala    |  1 +
 .../apache/spark/resource/ResourceAllocator.scala  | 39 +++
 .../org/apache/spark/resource/ResourceUtils.scala  | 58 --
 .../spark/scheduler/ExecutorResourceInfo.scala     |  7 ++-
 .../cluster/CoarseGrainedSchedulerBackend.scala    | 15 +-
 .../org/apache/spark/HeartbeatReceiverSuite.scala  |  1 +
 .../scala/org/apache/spark/SparkConfSuite.scala    | 51 +++
 .../scala/org/apache/spark/SparkContextSuite.scala |  3 +-
 .../deploy/StandaloneDynamicAllocationSuite.scala  |  1 +
 .../CoarseGrainedSchedulerBackendSuite.scala       |  1 +
 .../scheduler/ExecutorResourceInfoSuite.scala      | 34 +++--
 docs/configuration.md                              | 12 +++--
 13 files changed, 214 insertions(+), 30 deletions(-)

diff --git a/core/src/main/scala/org/apache/spark/SparkContext.scala b/core/src/main/scala/org/apache/spark/SparkContext.scala
index cad88ad..3cea2ef 100644
--- a/core/src/main/scala/org/apache/spark/SparkContext.scala
+++ b/core/src/main/scala/org/apache/spark/SparkContext.scala
@@ -2799,7 +2799,10 @@ object SparkContext extends Logging {
             s" = ${taskReq.amount}")
         }
         // Compare and update the max slots each executor can provide.
-        val resourceNumSlots = execAmount / taskReq.amount
+        // If the configured amount per task was < 1.0, a task is subdividing
+        // executor resources. If the amount per task was > 1.0, the task wants
+        // multiple executor resources.
+        val resourceNumSlots = Math.floor(execAmount * taskReq.numParts / taskReq.amount).toInt
         if (resourceNumSlots < numSlots) {
           numSlots = resourceNumSlots
           limitingResourceName = taskReq.resourceName
@@ -2809,11 +2812,19 @@
       // large enough if any task resources were specified.
       taskResourceRequirements.foreach { taskReq =>
         val execAmount = executorResourcesAndAmounts(taskReq.resourceName)
-        if (taskReq.amount * numSlots < execAmount) {
+        if ((numSlots * taskReq.amount / taskReq.numParts) < execAmount) {
+          val taskReqStr = if (taskReq.numParts > 1) {
+            s"${taskReq.amount}/${taskReq.numParts}"
+          } else {
+            s"${taskReq.amount}"
+          }
+          val resourceNumSlots = Math.floor(execAmount * taskReq.numParts/taskReq.amount).toInt
           val message = s"The configuration of resource: ${taskReq.resourceName} " +
-            s"(exec = ${execAmount}, task = ${taskReq.amount}) will result in wasted " +
-            s"resources due to resource ${limitingResourceName} limiting the number of " +
-            s"runnable tasks per executor to: ${numSlots}. Please adjust your configuration."
+            s"(exec = ${execAmount}, task = ${taskReqStr}, " +
+            s"runnable tasks = ${resourceNumSlots}) will " +
+            s"result in wasted resources due to resource ${limitingResourceName} limiting the " +
+            s"number of runnable tasks per executor to: ${numSlots}. Please adjust " +
+            s"your configuration."
           if (Utils.isTesting) {
             throw new SparkException(message)
           } else {
diff --git a/core/src/main/scala/org/apache/spark/deploy/master/WorkerInfo.scala