spark git commit: [SPARK-17458][SQL] Alias specified for aggregates in a pivot are not honored

2016-09-15 Thread hvanhovell
Repository: spark Updated Branches: refs/heads/master 1202075c9 -> b72486f82 [SPARK-17458][SQL] Alias specified for aggregates in a pivot are not honored ## What changes were proposed in this pull request? This change preserves the aliases given for pivot aggregations. ## How was this
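A minimal spark-shell sketch of the behavior being fixed (data and column names are hypothetical, not from the PR; `spark` is the usual shell session):

```scala
import spark.implicits._
import org.apache.spark.sql.functions._

// Hypothetical data: before the fix, the alias "total" below was dropped and
// the pivoted columns came out with generated names like "2015_sum(amount)".
val df = Seq(("2015", "a", 10), ("2016", "a", 20)).toDF("year", "key", "amount")

df.groupBy("key")
  .pivot("year")
  .agg(sum("amount").as("total"))  // alias preserved: columns "2015_total", "2016_total"
  .show()
```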

spark git commit: [SPARK-17484] Prevent invalid block locations from being reported after put() exceptions

2016-09-15 Thread joshrosen
Repository: spark Updated Branches: refs/heads/branch-2.0 0169c2edc -> 9c23f4408 [SPARK-17484] Prevent invalid block locations from being reported after put() exceptions ## What changes were proposed in this pull request? If a BlockManager `put()` call failed after the BlockManagerMaster

spark git commit: [SPARK-17484] Prevent invalid block locations from being reported after put() exceptions

2016-09-15 Thread joshrosen
Repository: spark Updated Branches: refs/heads/master a6b818200 -> 1202075c9 [SPARK-17484] Prevent invalid block locations from being reported after put() exceptions ## What changes were proposed in this pull request? If a BlockManager `put()` call failed after the BlockManagerMaster was

spark git commit: [SPARK-17364][SQL] Antlr lexer wrongly treats full qualified identifier as a decimal number token when parsing SQL string

2016-09-15 Thread hvanhovell
Repository: spark Updated Branches: refs/heads/branch-2.0 abb89c42e -> 0169c2edc [SPARK-17364][SQL] Antlr lexer wrongly treats full qualified identifier as a decimal number token when parsing SQL string ## What changes were proposed in this pull request? The Antlr lexer we use to tokenize a

spark git commit: [SPARK-17364][SQL] Antlr lexer wrongly treats full qualified identifier as a decimal number token when parsing SQL string

2016-09-15 Thread hvanhovell
Repository: spark Updated Branches: refs/heads/master fe767395f -> a6b818200 [SPARK-17364][SQL] Antlr lexer wrongly treats full qualified identifier as a decimal number token when parsing SQL string ## What changes were proposed in this pull request? The Antlr lexer we use to tokenize a SQL
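As a hedged sketch of the symptom (the table name here is hypothetical, not taken from the PR): when part of a qualified name begins with digits, the lexer's greedy decimal-number rule could consume the dot and the digits as one numeric token instead of IDENTIFIER '.' IDENTIFIER, producing a spurious parse error.

```scala
// Hypothetical identifiers: ".1d" after "prices" could lex as a double
// literal rather than a dot followed by the identifier "1d", so the
// qualified name failed to parse before the lexer fix.
spark.sql("SELECT * FROM prices.1d")
```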

spark git commit: [SPARK-17429][SQL] use ImplicitCastInputTypes with function Length

2016-09-15 Thread hvanhovell
Repository: spark Updated Branches: refs/heads/master d403562eb -> fe767395f [SPARK-17429][SQL] use ImplicitCastInputTypes with function Length ## What changes were proposed in this pull request? `select length(11);` and `select length(2.0);` currently return errors, while Hive handles them fine. This PR will
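The queries from the description, as a runnable spark-shell sketch (assuming a session with `spark` in scope):

```scala
// Before the change these raised analysis errors because length() only
// accepted string/binary input; with ImplicitCastInputTypes the numeric
// argument is implicitly cast to string first, matching Hive's behavior.
spark.sql("SELECT length(11)").show()   // 11 casts to "11"
spark.sql("SELECT length(2.0)").show()  // 2.0 casts to "2.0"
```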

spark git commit: [SPARK-17114][SQL] Fix aggregates grouped by literals with empty input

2016-09-15 Thread hvanhovell
Repository: spark Updated Branches: refs/heads/branch-2.0 e77a437d2 -> 62ab53658 [SPARK-17114][SQL] Fix aggregates grouped by literals with empty input ## What changes were proposed in this pull request? This PR fixes an issue with aggregates that have an empty input and use literals as

spark git commit: [SPARK-17547] Ensure temp shuffle data file is cleaned up after error

2016-09-15 Thread joshrosen
Repository: spark Updated Branches: refs/heads/branch-1.6 a447cd888 -> 8646b84fb [SPARK-17547] Ensure temp shuffle data file is cleaned up after error SPARK-8029 (#9610) modified shuffle writers to first stage their data to a temporary file in the same directory as the final destination file

spark git commit: [SPARK-17114][SQL] Fix aggregates grouped by literals with empty input

2016-09-15 Thread hvanhovell
Repository: spark Updated Branches: refs/heads/master 5b8f7377d -> d403562eb [SPARK-17114][SQL] Fix aggregates grouped by literals with empty input ## What changes were proposed in this pull request? This PR fixes an issue with aggregates that have an empty input and use literals as their
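A minimal sketch of the bug class (a hedged illustration, not the PR's own test): a grouped aggregate over empty input should return zero rows, and that must also hold when the grouping key is a literal.

```scala
import org.apache.spark.sql.functions.lit

// Empty input grouped by a literal: a plan that folds the literal grouping
// key away could return a spurious row; the correct result is an empty
// relation, just as for any other grouping key.
spark.range(0)
  .groupBy(lit(1))
  .count()
  .show()  // zero rows expected
```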

spark git commit: [SPARK-17547] Ensure temp shuffle data file is cleaned up after error

2016-09-15 Thread joshrosen
Repository: spark Updated Branches: refs/heads/branch-2.0 a09c258c9 -> e77a437d2 [SPARK-17547] Ensure temp shuffle data file is cleaned up after error SPARK-8029 (#9610) modified shuffle writers to first stage their data to a temporary file in the same directory as the final destination file

spark git commit: [SPARK-17547] Ensure temp shuffle data file is cleaned up after error

2016-09-15 Thread joshrosen
Repository: spark Updated Branches: refs/heads/master 0ad8eeb4d -> 5b8f7377d [SPARK-17547] Ensure temp shuffle data file is cleaned up after error SPARK-8029 (#9610) modified shuffle writers to first stage their data to a temporary file in the same directory as the final destination file and

spark git commit: [SPARK-17379][BUILD] Upgrade netty-all to 4.0.41 final for bug fixes

2016-09-15 Thread zsxwing
Repository: spark Updated Branches: refs/heads/master b47927814 -> 0ad8eeb4d [SPARK-17379][BUILD] Upgrade netty-all to 4.0.41 final for bug fixes ## What changes were proposed in this pull request? Upgrade netty-all to the latest release in the 4.0.x line, 4.0.41, whose release notes mention several bug fixes and

spark git commit: [SPARK-17451][CORE] CoarseGrainedExecutorBackend should inform driver before self-kill

2016-09-15 Thread zsxwing
Repository: spark Updated Branches: refs/heads/master 2ad276954 -> b47927814 [SPARK-17451][CORE] CoarseGrainedExecutorBackend should inform driver before self-kill ## What changes were proposed in this pull request? JIRA: https://issues.apache.org/jira/browse/SPARK-17451

spark git commit: [SPARK-17317][SPARKR] Add SparkR vignette to branch 2.0

2016-09-15 Thread shivaram
Repository: spark Updated Branches: refs/heads/branch-2.0 5c2bc8360 -> a09c258c9 [SPARK-17317][SPARKR] Add SparkR vignette to branch 2.0 ## What changes were proposed in this pull request? This PR adds the SparkR vignette to branch 2.0, which serves as a friendly guide walking through the

spark git commit: [SPARK-17406][BUILD][HOTFIX] MiMa excludes fix

2016-09-15 Thread srowen
Repository: spark Updated Branches: refs/heads/master 71a65825c -> 2ad276954 [SPARK-17406][BUILD][HOTFIX] MiMa excludes fix ## What changes were proposed in this pull request? Following https://github.com/apache/spark/pull/14969, for some reason the MiMa excludes weren't complete, but still

spark git commit: [SPARK-17536][SQL] Minor performance improvement to JDBC batch inserts

2016-09-15 Thread srowen
Repository: spark Updated Branches: refs/heads/master ad79fc0a8 -> 71a65825c [SPARK-17536][SQL] Minor performance improvement to JDBC batch inserts ## What changes were proposed in this pull request? Optimize a while loop during batch inserts. ## How was this patch tested? Unit tests were

spark git commit: [SPARK-17406][WEB UI] limit timeline executor events

2016-09-15 Thread srowen
Repository: spark Updated Branches: refs/heads/master 647ee05e5 -> ad79fc0a8 [SPARK-17406][WEB UI] limit timeline executor events ## What changes were proposed in this pull request? The job page is too slow to open when there are thousands of executor events (added or removed). I found

spark git commit: [SPARK-17521] Error when I use sparkContext.makeRDD(Seq())

2016-09-15 Thread srowen
Repository: spark Updated Branches: refs/heads/master f893e2625 -> 647ee05e5 [SPARK-17521] Error when I use sparkContext.makeRDD(Seq()) ## What changes were proposed in this pull request? When I use sc.makeRDD as below:
```
val data3 = sc.makeRDD(Seq())
println(data3.partitions.length)
```
I

spark git commit: [SPARK-17521] Error when I use sparkContext.makeRDD(Seq())

2016-09-15 Thread srowen
Repository: spark Updated Branches: refs/heads/branch-2.0 bb2bdb440 -> 5c2bc8360 [SPARK-17521] Error when I use sparkContext.makeRDD(Seq()) ## What changes were proposed in this pull request? When I use sc.makeRDD as below:
```
val data3 = sc.makeRDD(Seq())
println(data3.partitions.length)
```
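The reported snippet, expanded into a hedged spark-shell sketch (`sc` is the SparkContext; variable names follow the report):

```scala
// makeRDD on an empty Seq: the element type must be explicit, and the
// partition count follows the context's default parallelism. When an empty
// RDD is what you actually want, sc.emptyRDD is the idiomatic alternative.
val data3 = sc.makeRDD(Seq.empty[Int])
println(data3.partitions.length)

val empty = sc.emptyRDD[Int]
println(empty.partitions.length)
```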

spark git commit: [SPARK-17524][TESTS] Use specified spark.buffer.pageSize

2016-09-15 Thread srowen
Repository: spark Updated Branches: refs/heads/master d15b4f90e -> f893e2625 [SPARK-17524][TESTS] Use specified spark.buffer.pageSize ## What changes were proposed in this pull request? This PR has the appendRowUntilExceedingPageSize test in RowBasedKeyValueBatchSuite use whatever

spark git commit: [SPARK-17507][ML][MLLIB] check weight vector size in ANN

2016-09-15 Thread srowen
Repository: spark Updated Branches: refs/heads/master 6a6adb167 -> d15b4f90e [SPARK-17507][ML][MLLIB] check weight vector size in ANN ## What changes were proposed in this pull request? As the TODO describes, check the weight vector size and throw an exception if it is wrong. ## How was this patch

spark git commit: [SPARK-17440][SPARK-17441] Fixed Multiple Bugs in ALTER TABLE

2016-09-15 Thread wenchen
Repository: spark Updated Branches: refs/heads/master bb3229436 -> 6a6adb167 [SPARK-17440][SPARK-17441] Fixed Multiple Bugs in ALTER TABLE ### What changes were proposed in this pull request? For the following `ALTER TABLE` DDL, we should issue an exception when the target table is a `VIEW`: