spark git commit: [SPARK-16848][SQL] Check schema validation for user-specified schema in jdbc and table APIs

2017-01-11 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 43fa21b3e -> 24100f162 [SPARK-16848][SQL] Check schema validation for user-specified schema in jdbc and table APIs ## What changes were proposed in this pull request? This PR proposes to throw an exception for both jdbc APIs when user
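
As a hedged illustration of the behaviour described above (not the patch itself), the sketch below supplies a user-defined schema to the JDBC reader and expects it to be rejected with an AnalysisException; the connection URL, table name, and columns are placeholders.
```
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

object JdbcSchemaCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("jdbc-schema-check").getOrCreate()

    val userSchema = new StructType().add("id", IntegerType).add("name", StringType)

    // After this change, combining .schema(...) with the jdbc/table readers is
    // expected to fail with an AnalysisException instead of being silently ignored.
    try {
      spark.read
        .schema(userSchema)                                   // user-specified schema
        .jdbc("jdbc:postgresql://host/db", "people", new java.util.Properties())
    } catch {
      case e: org.apache.spark.sql.AnalysisException =>
        println(s"Rejected as expected: ${e.getMessage}")
    }

    spark.stop()
  }
}
```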

spark git commit: [SPARK-19132][SQL] Add test cases for row size estimation and aggregate estimation

2017-01-11 Thread rxin
Repository: spark Updated Branches: refs/heads/master 66fe819ad -> 43fa21b3e [SPARK-19132][SQL] Add test cases for row size estimation and aggregate estimation ## What changes were proposed in this pull request? In this PR, we add more test cases for project and aggregate estimation. ##
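
For context, a rough sketch of how such estimates can be inspected from user code; the table and columns are invented, and it assumes the no-argument stats accessor of later Spark releases (some 2.x versions exposed it as stats(conf)).
```
import org.apache.spark.sql.SparkSession

object StatsInspection {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("stats-inspection")
      .config("spark.sql.cbo.enabled", "true")   // enable cost-based estimation
      .getOrCreate()
    import spark.implicits._

    Seq((1, "a"), (2, "b"), (3, "c")).toDF("key", "value").write.saveAsTable("t")
    spark.sql("ANALYZE TABLE t COMPUTE STATISTICS FOR COLUMNS key, value")

    val agg = spark.table("t").groupBy($"key").count()
    // The optimized plan carries the estimated size in bytes and, when
    // column statistics are available, an estimated row count.
    val stats = agg.queryExecution.optimizedPlan.stats
    println(s"sizeInBytes = ${stats.sizeInBytes}, rowCount = ${stats.rowCount}")

    spark.stop()
  }
}
```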

spark git commit: [SPARK-19149][SQL] Follow-up: simplify cache implementation.

2017-01-11 Thread rxin
Repository: spark Updated Branches: refs/heads/master 30a07071f -> 66fe819ad [SPARK-19149][SQL] Follow-up: simplify cache implementation. ## What changes were proposed in this pull request? This patch simplifies slightly the logical plan statistics cache implementation, as discussed in
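
This entry is about internal simplification; purely as a generic illustration of the caching pattern involved (the Statistics class below is a simplified stand-in, not the Spark-internal one):
```
// Cache computed plan statistics so repeated calls do not recompute them.
case class Statistics(sizeInBytes: BigInt, rowCount: Option[BigInt] = None)

abstract class LogicalPlanLike {
  private var statsCache: Option[Statistics] = None

  final def stats: Statistics = statsCache.getOrElse {
    val computed = computeStats()
    statsCache = Some(computed)
    computed
  }

  // Drop the cached value, e.g. after the underlying data changes.
  final def invalidateStatsCache(): Unit = { statsCache = None }

  protected def computeStats(): Statistics
}
```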

spark git commit: [SPARK-18801][SQL] Support resolving a nested view

2017-01-11 Thread hvanhovell
Repository: spark Updated Branches: refs/heads/master 3bc2eff88 -> 30a07071f [SPARK-18801][SQL] Support resolving a nested view ## What changes were proposed in this pull request? We should be able to resolve a nested view. The main advantage is that if you update an underlying view, the
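
A self-contained sketch of the nested-view shape this change addresses; temporary views are used so the example runs without a metastore, though the change itself concerns catalog views as well, and the names are invented.
```
import org.apache.spark.sql.SparkSession

object NestedViews {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("nested-views").getOrCreate()
    import spark.implicits._

    Seq((1, "a"), (2, "b")).toDF("id", "name").createOrReplaceTempView("base")

    spark.sql("CREATE TEMPORARY VIEW v1 AS SELECT id, name FROM base")
    spark.sql("CREATE TEMPORARY VIEW v2 AS SELECT id FROM v1")  // v2 is defined on top of v1

    // Resolving v2 requires resolving the nested view v1 underneath it; if the
    // definition of v1 is later replaced, v2 picks up the new definition on the next read.
    spark.sql("SELECT * FROM v2").show()

    spark.stop()
  }
}
```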

spark git commit: [SPARK-17568][CORE][DEPLOY] Add spark-submit option to override ivy settings used to resolve packages/artifacts

2017-01-11 Thread vanzin
Repository: spark Updated Branches: refs/heads/master d749c0667 -> 3bc2eff88 [SPARK-17568][CORE][DEPLOY] Add spark-submit option to override ivy settings used to resolve packages/artifacts ## What changes were proposed in this pull request? Adding option in spark-submit to allow overriding
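
A hedged sketch of submitting with a custom Ivy settings file via SparkLauncher; the spark.jars.ivySettings property name, the paths, and the package coordinate are assumptions drawn from this change's description rather than verified against the final option.
```
import org.apache.spark.launcher.SparkLauncher

object SubmitWithIvySettings {
  def main(args: Array[String]): Unit = {
    // Requires SPARK_HOME to be set so the launcher can find spark-submit.
    val handle = new SparkLauncher()
      .setAppResource("/opt/jobs/my-job.jar")   // placeholder application jar
      .setMainClass("com.example.MyJob")        // placeholder main class
      .setMaster("yarn")
      // Resolve spark.jars.packages through the repositories described in a
      // custom ivysettings.xml instead of the built-in defaults.
      .setConf("spark.jars.ivySettings", "/etc/spark/ivysettings.xml")
      .setConf("spark.jars.packages", "com.example:mylib:1.0.0")
      .startApplication()

    println(s"Submitted, state: ${handle.getState}")
  }
}
```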

spark git commit: [SPARK-19130][SPARKR] Support setting literal value as column implicitly

2017-01-11 Thread shivaram
Repository: spark Updated Branches: refs/heads/branch-2.1 1022049c7 -> 82fcc1330 [SPARK-19130][SPARKR] Support setting literal value as column implicitly ## What changes were proposed in this pull request? ``` df$foo <- 1 ``` instead of ``` df$foo <- lit(1) ``` ## How was this patch

spark git commit: [SPARK-19130][SPARKR] Support setting literal value as column implicitly

2017-01-11 Thread shivaram
Repository: spark Updated Branches: refs/heads/master 4239a1081 -> d749c0667 [SPARK-19130][SPARKR] Support setting literal value as column implicitly ## What changes were proposed in this pull request? ``` df$foo <- 1 ``` instead of ``` df$foo <- lit(1) ``` ## How was this patch tested?

spark-website git commit: First Java example does not work with recent Spark version (see https://issues.apache.org/jira/browse/SPARK-19156)

2017-01-11 Thread srowen
Repository: spark-website Updated Branches: refs/heads/asf-site 46a7a8027 -> e95223137 First Java example does not work with recent Spark version (see https://issues.apache.org/jira/browse/SPARK-19156) Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo Commit:

spark git commit: [SPARK-19021][YARN] Generalize HDFSCredentialProvider to support non-HDFS secure filesystems

2017-01-11 Thread tgraves
Repository: spark Updated Branches: refs/heads/master a61551356 -> 4239a1081 [SPARK-19021][YARN] Generalize HDFSCredentialProvider to support non-HDFS secure filesystems Currently Spark can only get the token renewal interval from secure HDFS (hdfs://); if Spark runs with other secure
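
As a loose illustration of the generalization (not the Spark credential provider itself), the sketch below fetches delegation tokens for an arbitrary Hadoop filesystem URI rather than only hdfs://; the URI and renewer principal are placeholders.
```
import java.net.URI
import scala.collection.JavaConverters._
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.security.Credentials

object FetchTokens {
  def main(args: Array[String]): Unit = {
    val hadoopConf = new Configuration()
    val creds = new Credentials()

    // The same call path works for hdfs://, webhdfs://, wasb://, and other
    // secure Hadoop-compatible filesystems, not just plain HDFS.
    val fs = FileSystem.get(new URI("webhdfs://namenode:50070"), hadoopConf)
    fs.addDelegationTokens("yarn@EXAMPLE.COM", creds)

    creds.getAllTokens.asScala.foreach { token =>
      println(s"Obtained token of kind ${token.getKind}")
    }
  }
}
```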