spark git commit: [SPARK-18444][SPARKR] SparkR running in yarn-cluster mode should not download Spark package.

2016-11-22 Thread yliang
Repository: spark Updated Branches: refs/heads/master ebeb0830a -> acb971577 [SPARK-18444][SPARKR] SparkR running in yarn-cluster mode should not download Spark package. ## What changes were proposed in this pull request? When running SparkR job in yarn-cluster mode, it will download Spark package ...

spark git commit: [SPARK-18444][SPARKR] SparkR running in yarn-cluster mode should not download Spark package.

2016-11-22 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.1 aaa2a173a -> c70214075 [SPARK-18444][SPARKR] SparkR running in yarn-cluster mode should not download Spark package. ## What changes were proposed in this pull request? When running SparkR job in yarn-cluster mode, it will download Spark package ...

spark git commit: [SPARK-18444][SPARKR] SparkR running in yarn-cluster mode should not download Spark package.

2016-11-22 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.0 9dad3a7b0 -> a37238b06 [SPARK-18444][SPARKR] SparkR running in yarn-cluster mode should not download Spark package. ## What changes were proposed in this pull request? When running SparkR job in yarn-cluster mode, it will download Spark package ...

spark git commit: [SPARK-18514][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note that` across R API documentation

2016-11-22 Thread srowen
Repository: spark Updated Branches: refs/heads/master acb971577 -> 4922f9cdc [SPARK-18514][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note that` across R API documentation ## What changes were proposed in this pull request? It seems in R, there are `Note:`, `NOTE:`, and `Note that`. This PR ...

spark git commit: [SPARK-18514][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note that` across R API documentation

2016-11-22 Thread srowen
Repository: spark Updated Branches: refs/heads/branch-2.1 c70214075 -> 63aa01ffe [SPARK-18514][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note that` across R API documentation ## What changes were proposed in this pull request? It seems in R, there are `Note:`, `NOTE:`, and `Note that`. This PR ...

spark git commit: [SPARK-18447][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note that` across Python API documentation

2016-11-22 Thread srowen
Repository: spark Updated Branches: refs/heads/master 4922f9cdc -> 933a6548d [SPARK-18447][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note that` across Python API documentation ## What changes were proposed in this pull request? It seems in Python, there are `Note:`, `NOTE:`, and `Note that` ...

spark git commit: [SPARK-18447][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note that` across Python API documentation

2016-11-22 Thread srowen
Repository: spark Updated Branches: refs/heads/branch-2.1 63aa01ffe -> 36cd10d19 [SPARK-18447][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note that` across Python API documentation ## What changes were proposed in this pull request? It seems in Python, there are `Note:`, `NOTE:`, and `Note that` ...

spark git commit: [SPARK-18519][SQL] map type can not be used in EqualTo

2016-11-22 Thread hvanhovell
Repository: spark Updated Branches: refs/heads/branch-2.1 36cd10d19 -> 0e60e4b88 [SPARK-18519][SQL] map type can not be used in EqualTo ## What changes were proposed in this pull request? Technically map type is not orderable, but can be used in equality comparison. However, due to the ...

spark git commit: [SPARK-18519][SQL] map type can not be used in EqualTo

2016-11-22 Thread hvanhovell
Repository: spark Updated Branches: refs/heads/master 933a6548d -> bb152cdfb [SPARK-18519][SQL] map type can not be used in EqualTo ## What changes were proposed in this pull request? Technically map type is not orderable, but can be used in equality comparison. However, due to the ...
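A minimal Scala sketch of the kind of expression this affects, with hypothetical column names and data (not the case from the PR): comparing two map-typed expressions builds an `EqualTo`, which this change is expected to reject at analysis time.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{lit, map}
import scala.util.Try

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

// Hypothetical DataFrame with a map-typed column.
val df = Seq((1, Map("k" -> "v"))).toDF("id", "props")

// Comparing two map-typed expressions builds an EqualTo; after SPARK-18519 this
// is expected to be rejected at analysis time instead of relying on undefined
// map equality semantics.
val attempt = Try(df.filter($"props" === map(lit("k"), lit("v"))).count())
println(attempt)
```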

spark git commit: [SPARK-18504][SQL] Scalar subquery with extra group by columns returning incorrect result

2016-11-22 Thread hvanhovell
Repository: spark Updated Branches: refs/heads/branch-2.1 0e60e4b88 -> 0e624e990 [SPARK-18504][SQL] Scalar subquery with extra group by columns returning incorrect result ## What changes were proposed in this pull request? This PR blocks an incorrect result scenario in scalar subquery where ...

spark git commit: [SPARK-18504][SQL] Scalar subquery with extra group by columns returning incorrect result

2016-11-22 Thread hvanhovell
Repository: spark Updated Branches: refs/heads/master bb152cdfb -> 45ea46b7b [SPARK-18504][SQL] Scalar subquery with extra group by columns returning incorrect result ## What changes were proposed in this pull request? This PR blocks an incorrect result scenario in scalar subquery where there ...

spark git commit: [SPARK-18504][SQL] Scalar subquery with extra group by columns returning incorrect result

2016-11-22 Thread hvanhovell
Repository: spark Updated Branches: refs/heads/branch-2.0 a37238b06 -> 072f4c518 [SPARK-18504][SQL] Scalar subquery with extra group by columns returning incorrect result ## What changes were proposed in this pull request? This PR blocks an incorrect result scenario in scalar subquery where ...
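The problematic shape is easier to see with a concrete sketch. The tables `t1`/`t2` and columns below are hypothetical, not the ones from the PR: because the subquery's GROUP BY column is not tied down by the correlated predicate, the "scalar" subquery can produce more than one row per outer row, which is the situation this change blocks.

```scala
import org.apache.spark.sql.SparkSession
import scala.util.Try

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

// Hypothetical tables, not the ones from the PR.
Seq((1, "a"), (2, "b")).toDF("c1", "c2").createOrReplaceTempView("t1")
Seq((1, "x"), (1, "y"), (2, "z")).toDF("c1", "c2").createOrReplaceTempView("t2")

// GROUP BY t2.c2 is not constrained by the correlated predicate on c1, so the
// subquery can yield more than one row per outer row; after SPARK-18504 this
// shape is expected to be rejected rather than silently returning a wrong answer.
val attempt = Try(spark.sql(
  """SELECT c1,
    |       (SELECT max(c2) FROM t2 WHERE t2.c1 = t1.c1 GROUP BY t2.c2) AS m
    |FROM t1""".stripMargin).collect())
println(attempt)
```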

spark git commit: [SPARK-18507][SQL] HiveExternalCatalog.listPartitions should only call getTable once

2016-11-22 Thread andrewor14
Repository: spark Updated Branches: refs/heads/master 45ea46b7b -> 702cd403f [SPARK-18507][SQL] HiveExternalCatalog.listPartitions should only call getTable once ## What changes were proposed in this pull request? HiveExternalCatalog.listPartitions should only call `getTable` once, instead of ...

spark git commit: [SPARK-18507][SQL] HiveExternalCatalog.listPartitions should only call getTable once

2016-11-22 Thread andrewor14
Repository: spark Updated Branches: refs/heads/branch-2.1 0e624e990 -> fa360134d [SPARK-18507][SQL] HiveExternalCatalog.listPartitions should only call getTable once ## What changes were proposed in this pull request? HiveExternalCatalog.listPartitions should only call `getTable` once, instead of ...
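An illustrative sketch of the pattern these entries describe, with made-up types and helpers rather than the real `HiveExternalCatalog` code: the table metadata is looked up once and reused across the per-partition work instead of being re-fetched for every partition.

```scala
// Illustrative only: stand-in types and helpers, not the actual catalog code.
case class CatalogTable(db: String, table: String, partitionColumns: Seq[String])
case class CatalogPartition(spec: Map[String, String])

def getTable(db: String, table: String): CatalogTable =
  CatalogTable(db, table, Seq("ds"))                        // stand-in for one metastore round trip

def listRawPartitions(table: CatalogTable): Seq[CatalogPartition] =
  Seq(CatalogPartition(Map("ds" -> "2016-11-22")))          // stand-in

def listPartitions(db: String, tableName: String): Seq[CatalogPartition] = {
  val table = getTable(db, tableName)                       // looked up exactly once
  listRawPartitions(table).map { p =>
    // per-partition work reuses the already-fetched `table`
    CatalogPartition(p.spec.filter { case (k, _) => table.partitionColumns.contains(k) })
  }
}
```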

spark git commit: [SPARK-18465] Add 'IF EXISTS' clause to 'UNCACHE' to not throw exceptions when table doesn't exist

2016-11-22 Thread hvanhovell
Repository: spark Updated Branches: refs/heads/branch-2.1 fa360134d -> fb2ea54a6 [SPARK-18465] Add 'IF EXISTS' clause to 'UNCACHE' to not throw exceptions when table doesn't exist ## What changes were proposed in this pull request? While this behavior is debatable, consider the following use case ...

spark git commit: [SPARK-18465] Add 'IF EXISTS' clause to 'UNCACHE' to not throw exceptions when table doesn't exist

2016-11-22 Thread hvanhovell
Repository: spark Updated Branches: refs/heads/master 702cd403f -> bdc8153e8 [SPARK-18465] Add 'IF EXISTS' clause to 'UNCACHE' to not throw exceptions when table doesn't exist ## What changes were proposed in this pull request? While this behavior is debatable, consider the following use case ...
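A short sketch of the new clause, with hypothetical table names: UNCACHE TABLE on a missing table used to raise an error, while the IF EXISTS form is a no-op.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

Seq((1, "a"), (2, "b")).toDF("id", "v").createOrReplaceTempView("logs")
spark.sql("CACHE TABLE logs")
spark.sql("UNCACHE TABLE IF EXISTS logs")

// Without IF EXISTS this would throw because the table does not exist;
// with the new clause it is simply a no-op.
spark.sql("UNCACHE TABLE IF EXISTS no_such_table")
```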

spark git commit: [SPARK-18373][SPARK-18529][SS][KAFKA] Make failOnDataLoss=false work with Spark jobs

2016-11-22 Thread tdas
Repository: spark Updated Branches: refs/heads/master bdc8153e8 -> 2fd101b2f [SPARK-18373][SPARK-18529][SS][KAFKA] Make failOnDataLoss=false work with Spark jobs ## What changes were proposed in this pull request? This PR adds `CachedKafkaConsumer.getAndIgnoreLostData` to handle corner cases ...

spark git commit: [SPARK-18373][SPARK-18529][SS][KAFKA] Make failOnDataLoss=false work with Spark jobs

2016-11-22 Thread tdas
Repository: spark Updated Branches: refs/heads/branch-2.1 fb2ea54a6 -> bd338f60d [SPARK-18373][SPARK-18529][SS][KAFKA] Make failOnDataLoss=false work with Spark jobs ## What changes were proposed in this pull request? This PR adds `CachedKafkaConsumer.getAndIgnoreLostData` to handle corner cases ...
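A sketch of where the option sits, assuming the spark-sql-kafka-0-10 connector is on the classpath and using a hypothetical broker address and topic name.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()

// Hypothetical broker address and topic name. With failOnDataLoss=false the
// source logs and moves on when requested offsets are no longer available
// (e.g. aged-out data or a deleted topic) instead of failing the query;
// this PR is about making that path hold up inside Spark jobs.
val stream = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "events")
  .option("failOnDataLoss", "false")
  .load()

stream.printSchema()
```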

spark git commit: [SPARK-16803][SQL] SaveAsTable does not work when target table is a Hive serde table

2016-11-22 Thread lixiao
Repository: spark Updated Branches: refs/heads/branch-2.1 bd338f60d -> 64b9de9c0 [SPARK-16803][SQL] SaveAsTable does not work when target table is a Hive serde table ### What changes were proposed in this pull request? In Spark 2.0, `SaveAsTable` does not work when the target table is a Hive serde table ...

spark git commit: [SPARK-16803][SQL] SaveAsTable does not work when target table is a Hive serde table

2016-11-22 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 2fd101b2f -> 9c42d4a76 [SPARK-16803][SQL] SaveAsTable does not work when target table is a Hive serde table ### What changes were proposed in this pull request? In Spark 2.0, `SaveAsTable` does not work when the target table is a Hive serde table ...
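A hedged sketch of the scenario named in the title, using hypothetical table and column names and assuming a Hive-enabled build: an existing Hive serde table is the target of a `DataFrameWriter.saveAsTable` call.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder()
  .master("local[*]")
  .enableHiveSupport()   // assumes a Hive-enabled build; illustrative only
  .getOrCreate()
import spark.implicits._

// Hypothetical pre-existing Hive serde table.
spark.sql("CREATE TABLE IF NOT EXISTS sales (id INT, amount DOUBLE) STORED AS PARQUET")

// The scenario named in the title: writing through the DataFrame writer into an
// existing Hive serde table, which did not work in Spark 2.0.
Seq((1, 10.0), (2, 20.5)).toDF("id", "amount")
  .write
  .mode(SaveMode.Append)
  .saveAsTable("sales")
```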

spark git commit: [SPARK-18533] Raise correct error upon specification of schema for datasource tables created using CTAS

2016-11-22 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 9c42d4a76 -> 39a1d3063 [SPARK-18533] Raise correct error upon specification of schema for datasource tables created using CTAS ## What changes were proposed in this pull request? Fixes the inconsistency of error raised between data source ...

spark git commit: [SPARK-18533] Raise correct error upon specification of schema for datasource tables created using CTAS

2016-11-22 Thread lixiao
Repository: spark Updated Branches: refs/heads/branch-2.1 64b9de9c0 -> 4b96ffb13 [SPARK-18533] Raise correct error upon specification of schema for datasource tables created using CTAS ## What changes were proposed in this pull request? Fixes the inconsistency of error raised between data source ...
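A sketch of the statement shape at issue, with hypothetical names: a column list supplied together with AS SELECT for a CREATE TABLE ... USING data source table, which should now fail with the same kind of error as the Hive serde path.

```scala
import org.apache.spark.sql.SparkSession
import scala.util.Try

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

Seq((1, "a")).toDF("id", "v").createOrReplaceTempView("src")

// Supplying a column list together with AS SELECT is not allowed for a
// CREATE TABLE ... USING statement; after SPARK-18533 the failure here is
// expected to be the same kind of error the Hive serde path raises.
val attempt = Try(
  spark.sql("CREATE TABLE t (id INT, v STRING) USING parquet AS SELECT id, v FROM src"))
println(attempt)
```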

spark git commit: [SPARK-18530][SS][KAFKA] Change Kafka timestamp column type to TimestampType

2016-11-22 Thread zsxwing
Repository: spark Updated Branches: refs/heads/branch-2.1 4b96ffb13 -> 3be2d1e0b [SPARK-18530][SS][KAFKA] Change Kafka timestamp column type to TimestampType ## What changes were proposed in this pull request? Changed Kafka timestamp column type to TimestampType. ## How was this patch tested? ...

spark git commit: [SPARK-18530][SS][KAFKA] Change Kafka timestamp column type to TimestampType

2016-11-22 Thread zsxwing
Repository: spark Updated Branches: refs/heads/master 39a1d3063 -> d0212eb0f [SPARK-18530][SS][KAFKA] Change Kafka timestamp column type to TimestampType ## What changes were proposed in this pull request? Changed Kafka timestamp column type to TimestampType. ## How was this patch tested? ...
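A sketch assuming the spark-sql-kafka-0-10 connector and a hypothetical broker/topic; after this change the source's `timestamp` column is a TimestampType rather than a raw long, so it can feed time-based operations directly.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

// Hypothetical broker and topic names.
val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "events")
  .load()

// After SPARK-18530 the `timestamp` column should show up as TimestampType here,
// so it can be used directly with window() or withWatermark().
df.printSchema()
val withTs = df.select($"key", $"value", $"timestamp")
```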

spark git commit: [SPARK-18501][ML][SPARKR] Fix spark.glm errors when fitting on collinear data

2016-11-22 Thread yliang
Repository: spark Updated Branches: refs/heads/master d0212eb0f -> 982b82e32 [SPARK-18501][ML][SPARKR] Fix spark.glm errors when fitting on collinear data ## What changes were proposed in this pull request? * Fix SparkR ```spark.glm``` errors when fitting on collinear data, since ...

spark git commit: [SPARK-18501][ML][SPARKR] Fix spark.glm errors when fitting on collinear data

2016-11-22 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.1 3be2d1e0b -> fc5fee83e [SPARK-18501][ML][SPARKR] Fix spark.glm errors when fitting on collinear data ## What changes were proposed in this pull request? * Fix SparkR ```spark.glm``` errors when fitting on collinear data, since ...
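The SparkR fix is about fitting on collinear data; as a rough Scala illustration of what such input looks like, the sketch below uses hypothetical data and the JVM estimator that `spark.glm` delegates to, rather than the R API itself.

```scala
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.regression.GeneralizedLinearRegression
import org.apache.spark.sql.SparkSession
import scala.util.Try

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

// Hypothetical, perfectly collinear predictors: x2 is exactly 2 * x1.
val df = Seq((1.0, 1.0, 2.0), (2.0, 2.0, 4.0), (3.0, 3.0, 6.0), (4.0, 4.0, 8.0))
  .toDF("label", "x1", "x2")
val assembled = new VectorAssembler()
  .setInputCols(Array("x1", "x2"))
  .setOutputCol("features")
  .transform(df)

// spark.glm in SparkR wraps this estimator; on collinear input the coefficient
// standard errors are not defined, which is the situation the fix makes the
// R-side summary handle instead of erroring out.
val attempt = Try(new GeneralizedLinearRegression().setFamily("gaussian").fit(assembled))
println(attempt.map(_.coefficients))
```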

spark git commit: [SPARK-18179][SQL] Throws analysis exception with a proper message for unsupported argument types in reflect/java_method function

2016-11-22 Thread rxin
Repository: spark Updated Branches: refs/heads/master 982b82e32 -> 2559fb4b4 [SPARK-18179][SQL] Throws analysis exception with a proper message for unsupported argument types in reflect/java_method function ## What changes were proposed in this pull request? This PR proposes throwing an `AnalysisException` ...

spark git commit: [SPARK-18179][SQL] Throws analysis exception with a proper message for unsupported argument types in reflect/java_method function

2016-11-22 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-2.1 fc5fee83e -> fabb5aeaf [SPARK-18179][SQL] Throws analysis exception with a proper message for unsupported argument types in reflect/java_method function ## What changes were proposed in this pull request? This PR proposes throwing an `AnalysisException` ...
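A sketch of the two functions with hypothetical arguments: the first call uses supported argument types, while the second passes an array, which this change is expected to reject with an AnalysisException and a clear message.

```scala
import org.apache.spark.sql.SparkSession
import scala.util.Try

val spark = SparkSession.builder().master("local[*]").getOrCreate()

// Supported argument types (strings, numbers, booleans) map onto the Java signature.
spark.sql("SELECT java_method('java.lang.String', 'valueOf', 1)").show()

// An unsupported argument type such as an array is expected to be rejected at
// analysis time with a proper message after this change.
val attempt = Try(
  spark.sql("SELECT reflect('java.lang.String', 'valueOf', array(1, 2))").collect())
println(attempt)
```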