spark git commit: [SPARK-22779][SQL] Resolve default values for fallback configs.

2017-12-13 Thread lixiao
Repository: spark Updated Branches: refs/heads/master f8c7c1f21 -> c3dd2a26d [SPARK-22779][SQL] Resolve default values for fallback configs. SQLConf allows some callers to define a custom default value for configs, which slightly complicates the handling of fallback config entries,
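
For context, a minimal sketch of the fallback-entry idea: a fallback config carries no default of its own and resolves through the entry it delegates to. Class and key names below are illustrative, not the actual SQLConf/ConfigEntry code.

```scala
// Illustrative sketch only -- not the real SQLConf/ConfigEntry classes.
// A fallback entry has no default of its own; it resolves to the value
// (or default) of the entry it falls back to.
final case class ConfEntry[T](key: String, default: Option[T])

final class FallbackConfEntry[T](val key: String, fallback: ConfEntry[T]) {
  def resolve(settings: Map[String, T]): Option[T] =
    settings.get(key)
      .orElse(settings.get(fallback.key))
      .orElse(fallback.default)
}

object FallbackDemo extends App {
  val parent = ConfEntry("spark.sql.parent.size", Some(100))
  val child  = new FallbackConfEntry("spark.sql.child.size", parent)
  println(child.resolve(Map.empty))                          // Some(100): parent's default
  println(child.resolve(Map("spark.sql.parent.size" -> 42))) // Some(42): explicit parent value
}
```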

[2/2] spark git commit: [SPARK-22732] Add Structured Streaming APIs to DataSourceV2

2017-12-13 Thread zsxwing
[SPARK-22732] Add Structured Streaming APIs to DataSourceV2 ## What changes were proposed in this pull request? This PR provides DataSourceV2 API support for structured streaming, including new pieces needed to support continuous processing [SPARK-20928]. High level summary: - DataSourceV2
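
As a rough illustration of the shape of such an API (names and signatures here are invented for the sketch; the real interfaces live under org.apache.spark.sql.sources.v2 and differ): a streaming source reports offsets, the engine plans reads between offsets (micro-batch) or runs long-lived tasks (continuous), and commits offsets once they are durably processed.

```scala
// Conceptual sketch only, not the actual DataSourceV2 streaming interfaces.
trait Offset { def json: String }

case class LongOffset(value: Long) extends Offset { def json: String = value.toString }

trait StreamingReader {
  def latestOffset(): Offset                                    // how far the source has data
  def planInputPartitions(start: Offset, end: Offset): Seq[Seq[String]] // partitions of rows to read
  def commit(end: Offset): Unit                                 // source may discard data up to `end`
}
```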

[1/2] spark git commit: [SPARK-22732] Add Structured Streaming APIs to DataSourceV2

2017-12-13 Thread zsxwing
Repository: spark Updated Branches: refs/heads/master 1e44dd004 -> f8c7c1f21 http://git-wip-us.apache.org/repos/asf/spark/blob/f8c7c1f2/sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSourceSuite.scala -- diff

spark git commit: [SPARK-3181][ML] Implement huber loss for LinearRegression.

2017-12-13 Thread yliang
Repository: spark Updated Branches: refs/heads/master 2a29a60da -> 1e44dd004 [SPARK-3181][ML] Implement huber loss for LinearRegression. ## What changes were proposed in this pull request? MLlib ```LinearRegression``` supports _huber_ loss in addition to _leastSquares_ loss. The huber loss
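
For intuition, the textbook Huber loss on a single residual looks like the sketch below; Spark's implementation optimizes a more elaborate objective over the whole dataset (including a scale parameter), so this is illustrative only.

```scala
// Textbook Huber loss for one residual: quadratic near zero, linear in the tails.
def huberLoss(residual: Double, delta: Double = 1.35): Double = {
  val a = math.abs(residual)
  if (a <= delta) 0.5 * residual * residual
  else delta * (a - 0.5 * delta)
}

// Small residuals behave like least squares, large ones grow only linearly.
println(huberLoss(0.5))  // 0.125
println(huberLoss(10.0)) // 1.35 * (10 - 0.675) = 12.58875
```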

svn commit: r23723 - in /dev/spark/2.3.0-SNAPSHOT-2017_12_13_20_01-2a29a60-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2017-12-13 Thread pwendell
Author: pwendell Date: Thu Dec 14 04:14:32 2017 New Revision: 23723 Log: Apache Spark 2.3.0-SNAPSHOT-2017_12_13_20_01-2a29a60 docs [This commit notification would consist of 1407 parts, which exceeds the 50-part limit, so it was shortened to this summary.]

[1/2] spark git commit: Revert "[SPARK-22600][SQL][FOLLOW-UP] Fix a compilation error in TPCDS q75/q77"

2017-12-13 Thread wenchen
Repository: spark Updated Branches: refs/heads/master ef9299965 -> 2a29a60da Revert "[SPARK-22600][SQL][FOLLOW-UP] Fix a compilation error in TPCDS q75/q77" This reverts commit ef92999653f0e2a47752379a867647445d849aab. Project: http://git-wip-us.apache.org/repos/asf/spark/repo Commit:

[2/2] spark git commit: Revert "[SPARK-22600][SQL] Fix 64kb limit for deeply nested expressions under wholestage codegen"

2017-12-13 Thread wenchen
Revert "[SPARK-22600][SQL] Fix 64kb limit for deeply nested expressions under wholestage codegen" This reverts commit c7d0148615c921dca782ee3785b5d0cd59e42262. Project: http://git-wip-us.apache.org/repos/asf/spark/repo Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/2a29a60d Tree:

svn commit: r23721 - in /dev/spark/2.3.0-SNAPSHOT-2017_12_13_16_01-ef92999-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2017-12-13 Thread pwendell
Author: pwendell Date: Thu Dec 14 00:14:38 2017 New Revision: 23721 Log: Apache Spark 2.3.0-SNAPSHOT-2017_12_13_16_01-ef92999 docs [This commit notification would consist of 1407 parts, which exceeds the 50-part limit, so it was shortened to this summary.]

spark git commit: [SPARK-22600][SQL][FOLLOW-UP] Fix a compilation error in TPCDS q75/q77

2017-12-13 Thread lixiao
Repository: spark Updated Branches: refs/heads/master a83e8e6c2 -> ef9299965 [SPARK-22600][SQL][FOLLOW-UP] Fix a compilation error in TPCDS q75/q77 ## What changes were proposed in this pull request? This PR fixes a compilation error in TPCDS `q75`/`q77` caused by #19813; ```

spark git commit: [SPARK-22764][CORE] Fix flakiness in SparkContextSuite.

2017-12-13 Thread irashid
Repository: spark Updated Branches: refs/heads/master ba0e79f57 -> a83e8e6c2 [SPARK-22764][CORE] Fix flakiness in SparkContextSuite. Use a semaphore to synchronize the tasks with the listener code that is trying to cancel the job or stage, so that the listener won't try to cancel a job or
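
The synchronization pattern described reads roughly like this self-contained sketch (not the actual SparkContextSuite code): each task releases a permit when it starts, and the cancelling side acquires all permits first, so cancellation can never race ahead of task launch.

```scala
import java.util.concurrent.Semaphore

object SemaphoreSyncSketch extends App {
  val numTasks = 4
  val taskStarted = new Semaphore(0)

  val workers = (1 to numTasks).map { _ =>
    val t = new Thread(() => {
      taskStarted.release()        // signal "this task has started"
      // ... simulated task body ...
    })
    t.start()
    t
  }

  taskStarted.acquire(numTasks)    // block until every task has started
  println("all tasks started; safe to cancel the job/stage now")
  workers.foreach(_.join())
}
```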

spark git commit: [SPARK-22772][SQL] Use splitExpressionsWithCurrentInputs to split codes in elt

2017-12-13 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 0bdb4e516 -> ba0e79f57 [SPARK-22772][SQL] Use splitExpressionsWithCurrentInputs to split codes in elt ## What changes were proposed in this pull request? In SPARK-22550, which fixes the 64KB JVM bytecode limit problem with `elt`,
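
Conceptually, the split works like the hypothetical helper below (not the real CodegenContext): group generated snippets into separate methods and call them from the main body, so no single generated method exceeds the JVM's 64KB bytecode-per-method limit.

```scala
// Conceptual sketch only: turn a long list of generated Java snippets into
// several helper methods plus the calls that invoke them.
def splitIntoHelpers(snippets: Seq[String], perMethod: Int = 10): String = {
  val helpers = snippets.grouped(perMethod).zipWithIndex.map { case (group, i) =>
    s"""private void eltChunk_$i(Object[] inputs) {
       |  ${group.mkString("\n  ")}
       |}""".stripMargin
  }.toSeq
  val calls = helpers.indices.map(i => s"eltChunk_$i(inputs);").mkString("\n")
  (calls +: helpers).mkString("\n\n")
}
```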

spark git commit: [SPARK-22574][MESOS][SUBMIT] Check submission request parameters

2017-12-13 Thread vanzin
Repository: spark Updated Branches: refs/heads/master 1abcbed67 -> 0bdb4e516 [SPARK-22574][MESOS][SUBMIT] Check submission request parameters ## What changes were proposed in this pull request? PR closed with all the comments -> https://github.com/apache/spark/pull/19793 It solves the

spark git commit: [SPARK-22574][MESOS][SUBMIT] Check submission request parameters

2017-12-13 Thread vanzin
Repository: spark Updated Branches: refs/heads/branch-2.2 0230515a2 -> b4f4be396 [SPARK-22574][MESOS][SUBMIT] Check submission request parameters ## What changes were proposed in this pull request? PR closed with all the comments -> https://github.com/apache/spark/pull/19793 It solves the
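
A hedged sketch of that kind of parameter check (the request fields here are illustrative, not the actual Mesos REST submission classes): reject a submission up front when required parameters are missing, instead of failing later with an opaque error.

```scala
// Illustrative only -- field names are hypothetical, not Spark's Mesos classes.
case class SubmitRequest(
    appResource: Option[String],
    mainClass: Option[String],
    appArgs: Option[Seq[String]])

def validate(req: SubmitRequest): Either[String, SubmitRequest] = {
  val missing = Seq(
    "appResource" -> req.appResource.isEmpty,
    "mainClass"   -> req.mainClass.isEmpty,
    "appArgs"     -> req.appArgs.isEmpty
  ).collect { case (name, true) => name }
  if (missing.isEmpty) Right(req)
  else Left(s"Missing submission parameters: ${missing.mkString(", ")}")
}
```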

svn commit: r23716 - in /dev/spark/2.3.0-SNAPSHOT-2017_12_13_12_01-1abcbed-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2017-12-13 Thread pwendell
Author: pwendell Date: Wed Dec 13 20:14:36 2017 New Revision: 23716 Log: Apache Spark 2.3.0-SNAPSHOT-2017_12_13_12_01-1abcbed docs [This commit notification would consist of 1407 parts, which exceeds the 50-part limit, so it was shortened to this summary.]

spark git commit: [SPARK-22763][CORE] SHS: Ignore unknown events and parse through the file

2017-12-13 Thread lixiao
Repository: spark Updated Branches: refs/heads/master c5a4701ac -> 1abcbed67 [SPARK-22763][CORE] SHS: Ignore unknown events and parse through the file ## What changes were proposed in this pull request? As Spark code changes, new events appear in the event log (#19649), and we used to
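
The tolerant-parsing idea reads roughly like this sketch (not the actual ReplayListenerBus code): an event that cannot be recognized is logged and skipped, rather than aborting replay of the whole file.

```scala
// Sketch only: `parse` returns None for event types it does not recognize.
def replay(lines: Iterator[String], parse: String => Option[AnyRef]): Unit = {
  lines.zipWithIndex.foreach { case (line, i) =>
    parse(line) match {
      case Some(event) => println(s"Replaying $event")   // post to listeners in the real code
      case None        => println(s"Ignoring unknown event at line ${i + 1}")
    }
  }
}
```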

spark git commit: Revert "[SPARK-21417][SQL] Infer join conditions using propagated constraints"

2017-12-13 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 8eb5609d8 -> c5a4701ac Revert "[SPARK-21417][SQL] Infer join conditions using propagated constraints" This reverts commit 6ac57fd0d1c82b834eb4bf0dd57596b92a99d6de. Project: http://git-wip-us.apache.org/repos/asf/spark/repo Commit:

spark git commit: [SPARK-22754][DEPLOY] Check whether spark.executor.heartbeatInterval bigger…

2017-12-13 Thread vanzin
Repository: spark Updated Branches: refs/heads/master f6bcd3e53 -> 8eb5609d8 [SPARK-22754][DEPLOY] Check whether spark.executor.heartbeatInterval is bigger than spark.network.timeout or not ## What changes were proposed in this pull request? If spark.executor.heartbeatInterval is bigger
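
A minimal sketch of such a startup check, assuming the config keys named in the subject and their documented defaults (this is not the exact SparkConf code):

```scala
// Fail fast if heartbeats would be sent less often than the network timeout,
// which would get healthy executors marked as lost.
def validateHeartbeat(conf: Map[String, String]): Unit = {
  // assume plain "<n>s" values for this sketch; real Spark parses full time strings
  def seconds(key: String, default: String): Long =
    conf.getOrElse(key, default).stripSuffix("s").toLong

  val heartbeat  = seconds("spark.executor.heartbeatInterval", "10s")
  val netTimeout = seconds("spark.network.timeout", "120s")
  require(heartbeat <= netTimeout,
    s"spark.executor.heartbeatInterval (${heartbeat}s) must not exceed " +
      s"spark.network.timeout (${netTimeout}s)")
}
```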

spark git commit: [SPARK-22767][SQL] use ctx.addReferenceObj in InSet and ScalaUDF

2017-12-13 Thread wenchen
Repository: spark Updated Branches: refs/heads/master 58f7c825a -> f6bcd3e53 [SPARK-22767][SQL] use ctx.addReferenceObj in InSet and ScalaUDF ## What changes were proposed in this pull request? We should not operate on `references` directly in `Expression.doGenCode`; instead we should use
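
The gist of addReferenceObj, as a conceptual sketch rather than the real CodegenContext: register the runtime object once and hand back the generated-code expression that reads it, so expression code never indexes into `references` by hand.

```scala
import scala.collection.mutable

// Conceptual sketch only -- not Spark's CodegenContext.
class CodegenCtxSketch {
  val references = mutable.ArrayBuffer[Any]()

  // Register the object and return the generated-Java expression that accesses it.
  def addReferenceObj(name: String, obj: Any, className: String): String = {
    references += obj
    val idx = references.size - 1
    s"(($className) references[$idx]) /* $name */"
  }
}

// Hypothetical usage from an expression's codegen:
// val setRef = ctx.addReferenceObj("inSet", hashSet, "java.util.HashSet")
```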

svn commit: r23711 - in /dev/spark/2.3.0-SNAPSHOT-2017_12_13_08_02-58f7c82-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2017-12-13 Thread pwendell
Author: pwendell Date: Wed Dec 13 16:22:46 2017 New Revision: 23711 Log: Apache Spark 2.3.0-SNAPSHOT-2017_12_13_08_02-58f7c82 docs [This commit notification would consist of 1407 parts, which exceeds the 50-part limit, so it was shortened to this summary.]

spark git commit: [SPARK-20849][DOC][FOLLOWUP] Document R DecisionTree - Link Classification Example

2017-12-13 Thread srowen
Repository: spark Updated Branches: refs/heads/master 7453ab024 -> 58f7c825a [SPARK-20849][DOC][FOLLOWUP] Document R DecisionTree - Link Classification Example ## What changes were proposed in this pull request? In https://github.com/apache/spark/pull/18067, only the regression example is

svn commit: r23709 - in /dev/spark/2.3.0-SNAPSHOT-2017_12_13_04_01-7453ab0-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2017-12-13 Thread pwendell
Author: pwendell Date: Wed Dec 13 12:19:38 2017 New Revision: 23709 Log: Apache Spark 2.3.0-SNAPSHOT-2017_12_13_04_01-7453ab0 docs [This commit notification would consist of 1407 parts, which exceeds the 50-part limit, so it was shortened to this summary.]

spark git commit: [SPARK-22745][SQL] read partition stats from Hive

2017-12-13 Thread wenchen
Repository: spark Updated Branches: refs/heads/master 682eb4f2e -> 7453ab024 [SPARK-22745][SQL] read partition stats from Hive ## What changes were proposed in this pull request? Currently Spark can read table stats (e.g. `totalSize`, `numRows`) from Hive; we can also support reading
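
Hive keeps per-partition statistics as string parameters; a conceptual sketch of reading them follows (the key names totalSize/numRows are Hive's, the surrounding code is illustrative, not the actual HiveClientImpl):

```scala
case class PartitionStats(sizeInBytes: Option[Long], rowCount: Option[Long])

// Pull the well-known Hive statistic keys out of a partition's parameter map.
def readPartitionStats(params: Map[String, String]): PartitionStats =
  PartitionStats(
    sizeInBytes = params.get("totalSize").map(_.toLong),
    rowCount    = params.get("numRows").map(_.toLong))

// Example parameter map as Hive might store it:
println(readPartitionStats(Map("totalSize" -> "1048576", "numRows" -> "1000")))
```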

svn commit: r23703 - in /dev/spark/2.3.0-SNAPSHOT-2017_12_13_00_01-682eb4f-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2017-12-13 Thread pwendell
Author: pwendell Date: Wed Dec 13 08:15:04 2017 New Revision: 23703 Log: Apache Spark 2.3.0-SNAPSHOT-2017_12_13_00_01-682eb4f docs [This commit notification would consist of 1407 parts, which exceeds the 50-part limit, so it was shortened to this summary.]