svn commit: r28311 - in /dev/spark/2.4.0-SNAPSHOT-2018_07_24_00_02-13a67b0-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-07-24 Thread pwendell
Author: pwendell Date: Tue Jul 24 07:17:47 2018 New Revision: 28311 Log: Apache Spark 2.4.0-SNAPSHOT-2018_07_24_00_02-13a67b0 docs [This commit notification would consist of 1469 parts, which exceeds the limit of 50 ones, so it was shortened to the summary.] ---

spark git commit: [SPARK-22499][FOLLOWUP][SQL] Reduce input string expressions for Least and Greatest to reduce time in its test

2018-07-24 Thread gurwls223
Repository: spark Updated Branches: refs/heads/master 13a67b070 -> 3d5c61e5f [SPARK-22499][FOLLOWUP][SQL] Reduce input string expressions for Least and Greatest to reduce time in its test ## What changes were proposed in this pull request? It's minor and trivial, but it looks like 2000 inputs are good

spark git commit: [SPARK-22499][FOLLOWUP][SQL] Reduce input string expressions for Least and Greatest to reduce time in its test

2018-07-24 Thread gurwls223
Repository: spark Updated Branches: refs/heads/branch-2.3 f5bc94861 -> 740a23d7d [SPARK-22499][FOLLOWUP][SQL] Reduce input string expressions for Least and Greatest to reduce time in its test ## What changes were proposed in this pull request? It's minor and trivial, but it looks like 2000 inputs are good

spark git commit: [SPARK-22499][FOLLOWUP][SQL] Reduce input string expressions for Least and Greatest to reduce time in its test

2018-07-24 Thread gurwls223
Repository: spark Updated Branches: refs/heads/branch-2.2 144426cff -> f339e2fd7 [SPARK-22499][FOLLOWUP][SQL] Reduce input string expressions for Least and Greatest to reduce time in its test ## What changes were proposed in this pull request? It's minor and trivial, but it looks like 2000 inputs are good

svn commit: r28315 - in /dev/spark/2.3.3-SNAPSHOT-2018_07_24_06_01-740a23d-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-07-24 Thread pwendell
Author: pwendell Date: Tue Jul 24 13:16:23 2018 New Revision: 28315 Log: Apache Spark 2.3.3-SNAPSHOT-2018_07_24_06_01-740a23d docs [This commit notification would consist of 1443 parts, which exceeds the limit of 50 ones, so it was shortened to the summary.] ---

svn commit: r28322 - in /dev/spark/2.4.0-SNAPSHOT-2018_07_24_08_02-3d5c61e-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-07-24 Thread pwendell
Author: pwendell Date: Tue Jul 24 15:16:47 2018 New Revision: 28322 Log: Apache Spark 2.4.0-SNAPSHOT-2018_07_24_08_02-3d5c61e docs [This commit notification would consist of 1469 parts, which exceeds the limit of 50 ones, so it was shortened to the summary.] ---

spark git commit: [SPARK-23325] Use InternalRow when reading with DataSourceV2.

2018-07-24 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 3d5c61e5f -> 9d27541a8 [SPARK-23325] Use InternalRow when reading with DataSourceV2. ## What changes were proposed in this pull request? This updates the DataSourceV2 API to use InternalRow instead of Row for the default case with no scan

spark git commit: [SPARK-24812][SQL] Last Access Time in the table description is not valid

2018-07-24 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 9d27541a8 -> d4a277f0c [SPARK-24812][SQL] Last Access Time in the table description is not valid ## What changes were proposed in this pull request? Last Access Time will always display the wrong date, Thu Jan 01 05:30:00 IST 1970, when the user

svn commit: r28323 - in /dev/spark/2.4.0-SNAPSHOT-2018_07_24_12_01-d4a277f-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-07-24 Thread pwendell
Author: pwendell Date: Tue Jul 24 19:16:08 2018 New Revision: 28323 Log: Apache Spark 2.4.0-SNAPSHOT-2018_07_24_12_01-d4a277f docs [This commit notification would consist of 1469 parts, which exceeds the limit of 50 ones, so it was shortened to the summary.] ---

spark git commit: [SPARK-24895] Remove spotbugs plugin

2018-07-24 Thread yhuai
Repository: spark Updated Branches: refs/heads/master d4a277f0c -> fc21f192a [SPARK-24895] Remove spotbugs plugin ## What changes were proposed in this pull request? The spotbugs maven plugin was added recently, before the 2.4.0 snapshot artifacts were broken. To ensure it does not affect t

spark git commit: [SPARK-24908][R][STYLE] removing spaces to make lintr happy

2018-07-24 Thread dbtsai
Repository: spark Updated Branches: refs/heads/master fc21f192a -> 3efdf3532 [SPARK-24908][R][STYLE] removing spaces to make lintr happy ## What changes were proposed in this pull request? during my travails in porting spark builds to run on our centos worker, i managed to recreate (as best

svn commit: r28327 - in /dev/spark/2.4.0-SNAPSHOT-2018_07_24_16_01-fc21f19-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-07-24 Thread pwendell
Author: pwendell Date: Tue Jul 24 23:16:06 2018 New Revision: 28327 Log: Apache Spark 2.4.0-SNAPSHOT-2018_07_24_16_01-fc21f19 docs [This commit notification would consist of 1469 parts, which exceeds the limit of 50 ones, so it was shortened to the summary.] ---

spark git commit: [SPARK-24297][CORE] Fetch-to-disk by default for > 2gb

2018-07-24 Thread jshao
Repository: spark Updated Branches: refs/heads/master 3efdf3532 -> 15fff7903 [SPARK-24297][CORE] Fetch-to-disk by default for > 2gb Fetch-to-mem is guaranteed to fail if the message is bigger than 2 GB, so we might as well use fetch-to-disk in that case. The message includes some metadata in
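The reasoning in SPARK-24297 is a simple threshold decision: an in-memory fetch of a block larger than 2 GB is guaranteed to fail, so such blocks should be streamed to disk instead. A toy sketch of that decision (illustrative Python, not Spark's actual shuffle code; the function name and threshold constant are hypothetical):

```python
# Illustrative sketch of the fetch-to-disk decision from SPARK-24297.
# Blocks above the threshold are streamed to a temp file rather than
# buffered in memory, since fetch-to-mem is guaranteed to fail > 2 GB.

MAX_FETCH_TO_MEM = 2 * 1024 * 1024 * 1024  # hypothetical 2 GB threshold

def fetch_destination(block_size: int, threshold: int = MAX_FETCH_TO_MEM) -> str:
    """Decide whether a remote block is fetched to memory or to disk."""
    return "disk" if block_size > threshold else "memory"
```

In Spark itself this behavior is governed by configuration rather than a hard-coded constant; the sketch only captures the decision rule.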

spark git commit: [SPARK-24891][SQL] Fix HandleNullInputsForUDF rule

2018-07-24 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 15fff7903 -> c26b09216 [SPARK-24891][SQL] Fix HandleNullInputsForUDF rule ## What changes were proposed in this pull request? The HandleNullInputsForUDF would always add a new `If` node every time it is applied. That would cause a differe

spark git commit: [SPARK-24891][SQL] Fix HandleNullInputsForUDF rule

2018-07-24 Thread lixiao
Repository: spark Updated Branches: refs/heads/branch-2.3 740a23d7d -> 6a5999286 [SPARK-24891][SQL] Fix HandleNullInputsForUDF rule The HandleNullInputsForUDF would always add a new `If` node every time it is applied. That would cause a difference between the same plan being analyzed once an
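The bug described above is one of idempotency: a rule that wraps a UDF call in a null-check on every pass yields a different plan each time it runs. A toy illustration (plain Python tuples standing in for plan nodes; not Spark's analyzer code) of the idempotent fix, which checks for the guard before adding it:

```python
# Toy sketch of an idempotent rewrite rule: wrap a ('udf', arg) node in
# an ('if_null', ...) guard exactly once, so re-applying the rule leaves
# the plan unchanged. Node names here are hypothetical.

def handle_null_inputs(expr: tuple) -> tuple:
    if expr[0] == "if_null":        # guard already present: do nothing
        return expr
    if expr[0] == "udf":
        return ("if_null", expr)    # add the null-check guard once
    return expr

plan = ("udf", "x")
once = handle_null_inputs(plan)
twice = handle_null_inputs(once)
assert once == twice  # the rule is now idempotent
```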

svn commit: r28337 - in /dev/spark/2.4.0-SNAPSHOT-2018_07_24_20_02-c26b092-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-07-24 Thread pwendell
Author: pwendell Date: Wed Jul 25 03:17:00 2018 New Revision: 28337 Log: Apache Spark 2.4.0-SNAPSHOT-2018_07_24_20_02-c26b092 docs [This commit notification would consist of 1469 parts, which exceeds the limit of 50 ones, so it was shortened to the summary.] ---

spark git commit: [SPARK-24890][SQL] Short circuiting the `if` condition when `trueValue` and `falseValue` are the same

2018-07-24 Thread lixiao
Repository: spark Updated Branches: refs/heads/master c26b09216 -> d4c341589 [SPARK-24890][SQL] Short circuiting the `if` condition when `trueValue` and `falseValue` are the same ## What changes were proposed in this pull request? When `trueValue` and `falseValue` are semantic equivalence, t
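The optimization above can be sketched in a few lines (toy Python expression trees, not Catalyst; Spark compares branches by semantic equivalence, while this sketch uses plain structural equality):

```python
# Toy sketch of SPARK-24890: If(cond, v, v) simplifies to v when both
# branches are the same, so the condition need not be evaluated at all.

def simplify_if(node):
    """Rewrite an ('if', cond, true_v, false_v) node to true_v when the
    two branches are identical; otherwise leave the node unchanged."""
    if isinstance(node, tuple) and node[0] == "if":
        _, cond, true_v, false_v = node
        if true_v == false_v:
            return true_v
    return node
```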

spark git commit: [SPARK-23957][SQL] Sorts in subqueries are redundant and can be removed

2018-07-24 Thread lixiao
Repository: spark Updated Branches: refs/heads/master d4c341589 -> afb062753 [SPARK-23957][SQL] Sorts in subqueries are redundant and can be removed ## What changes were proposed in this pull request? Thanks to henryr for the original idea at https://github.com/apache/spark/pull/21049 Descri
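The idea in SPARK-23957 is that a sort inside a subquery is wasted work when the parent operator discards ordering anyway. A minimal sketch with toy tuple plans (not Spark's optimizer; the operator names and the set of order-insensitive operators are illustrative):

```python
# Toy sketch: drop a 'sort' node sitting directly under an operator that
# does not preserve or require its child's ordering.

ORDER_INSENSITIVE = {"aggregate", "join", "subquery"}  # illustrative set

def remove_redundant_sorts(plan):
    if not isinstance(plan, tuple):
        return plan
    op, *children = plan
    children = [remove_redundant_sorts(c) for c in children]
    if op in ORDER_INSENSITIVE:
        # a sort feeding an order-insensitive operator is redundant
        children = [c[1] if isinstance(c, tuple) and c[0] == "sort" else c
                    for c in children]
    return (op, *children)
```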

spark git commit: [SPARK-19018][SQL] Add support for custom encoding on csv writer

2018-07-24 Thread gurwls223
Repository: spark Updated Branches: refs/heads/master afb062753 -> 78e0a725e [SPARK-19018][SQL] Add support for custom encoding on csv writer ## What changes were proposed in this pull request? Add support for custom encoding on csv writer, see https://issues.apache.org/jira/browse/SPARK-190
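What "custom encoding on CSV write" means can be shown stand-alone with Python's csv module rather than Spark (a minimal illustration of the concept, not the Spark API itself): the same rows serialize to different bytes depending on the requested output encoding.

```python
import csv
import io

def write_csv(rows, encoding="utf-8"):
    """Serialize rows to CSV bytes in the requested output encoding."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue().encode(encoding)

data = [["name", "city"], ["José", "São Paulo"]]
utf8_bytes = write_csv(data)                       # default utf-8
latin1_bytes = write_csv(data, encoding="latin-1")  # custom encoding
# Different byte representations, same logical content:
assert utf8_bytes != latin1_bytes
assert utf8_bytes.decode("utf-8") == latin1_bytes.decode("latin-1")
```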

spark git commit: [SPARK-18874][SQL][FOLLOW-UP] Improvement type mismatched message

2018-07-24 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 78e0a725e -> 7a5fd4a91 [SPARK-18874][SQL][FOLLOW-UP] Improvement type mismatched message ## What changes were proposed in this pull request? Improve the `IN` predicate type-mismatch message: ```sql Mismatched columns: [(, t, 4, ., `, t, 4