spark git commit: [SPARK-10063] Follow-up: remove dead code related to an old output committer.

2017-02-03  lixiao
Repository: spark
Updated Branches: refs/heads/master 050c20cc9 -> 22d4aae8b

## What changes were proposed in this pull request?
DirectParquetOutputCommitter was removed from Spark as it was deemed unsafe to use.
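For background on what committer selection looks like without the removed class, here is a minimal sketch, assuming the documented `spark.sql.parquet.output.committer.class` setting and the stock `ParquetOutputCommitter`; it is only an illustration, not part of the change above.

```
// Sketch only: shows how a Parquet output committer is chosen via configuration.
// The config key and committer class are assumptions, not code from this commit.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("committer-config-sketch")
  .master("local[*]")
  // With DirectParquetOutputCommitter gone, writes go through a
  // FileOutputCommitter-style committer that stages output in a temporary
  // location and commits it only when the job succeeds.
  .config("spark.sql.parquet.output.committer.class",
          "org.apache.parquet.hadoop.ParquetOutputCommitter")
  .getOrCreate()

spark.range(10).write.mode("overwrite").parquet("/tmp/committer-sketch")
```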

spark git commit: [SPARK-19386][SPARKR][FOLLOWUP] fix error in vignettes

2017-02-03  lixiao
Repository: spark
Updated Branches: refs/heads/master 48aafeda7 -> 050c20cc9

## What changes were proposed in this pull request?
The current version has an error in the vignettes:
```
model <- spark.bisectingKmeans(df, Sepal_Length ~ Sepal_Width, k…
```

spark git commit: [SPARK-19386][SPARKR][DOC] Bisecting k-means in SparkR documentation

2017-02-03  felixcheung
Repository: spark
Updated Branches: refs/heads/master 2f523fa0c -> 48aafeda7

## What changes were proposed in this pull request?
Update programming guide, example and vignette with Bisecting k-means.
Author: krishnakalyan3…
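Since the two entries above document the estimator rather than change it, the sketch below shows the same Bisecting k-means algorithm through the Scala ML API; the toy data, column names, and k value are illustrative stand-ins for the iris columns used in the SparkR vignette.

```
import org.apache.spark.ml.clustering.BisectingKMeans
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("bkm-sketch").master("local[*]").getOrCreate()

// Toy data standing in for the iris columns used in the SparkR vignette.
val df = spark.createDataFrame(Seq(
  (5.1, 3.5), (4.9, 3.0), (6.7, 3.1), (6.3, 2.5)
)).toDF("Sepal_Length", "Sepal_Width")

// Assemble the predictor column into a feature vector, as the R formula does implicitly.
val features = new VectorAssembler()
  .setInputCols(Array("Sepal_Width"))
  .setOutputCol("features")
  .transform(df)

// Bisecting k-means repeatedly splits the largest cluster until k clusters exist.
val model = new BisectingKMeans().setK(4).setSeed(1L).fit(features)
model.transform(features).show()
```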

spark git commit: [SPARK-19244][CORE] Sort MemoryConsumers according to their memory usage when spilling

2017-02-03  mridulm80
Repository: spark
Updated Branches: refs/heads/master 52d4f6194 -> 2f523fa0c

## What changes were proposed in this pull request?
In `TaskMemoryManager`, when we acquire memory by calling…
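The preview above is cut off, so the following is only a toy Scala sketch of the general policy the title describes, choosing spill victims by their memory usage, with a hypothetical `Consumer` type; it is not the `TaskMemoryManager` implementation.

```
// Toy sketch: prefer the single smallest consumer that can cover the request;
// otherwise spill the largest consumers until enough memory has been freed.
// Illustration only, not TaskMemoryManager code.
final case class Consumer(name: String, used: Long)

def selectSpillVictims(required: Long, consumers: Seq[Consumer]): Seq[Consumer] = {
  // Smallest consumer that alone satisfies the request, if any.
  consumers.filter(_.used >= required).sortBy(_.used).headOption match {
    case Some(single) => Seq(single)
    case None =>
      // Otherwise take consumers in decreasing order of usage until the
      // running total covers the request.
      val ordered = consumers.sortBy(c => -c.used)
      var freed = 0L
      ordered.takeWhile { c => val more = freed < required; freed += c.used; more }
  }
}

// Example: a 200-unit request is served by spilling only "map" (220 units),
// leaving the larger "sorter" untouched.
val victims = selectSpillVictims(200L,
  Seq(Consumer("sorter", 500), Consumer("map", 220), Consumer("agg", 40)))
```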

spark git commit: [SPARK-18909][SQL] The error messages in `ExpressionEncoder.toRow/fromRow` are too verbose

2017-02-03  wenchen
Repository: spark
Updated Branches: refs/heads/master 20b4ca140 -> 52d4f6194

## What changes were proposed in this pull request?
In `ExpressionEncoder.toRow` and `fromRow`, we catch the exception and…
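The preview above breaks off after "we catch the exception and"; the sketch below shows the general catch-and-rethrow-with-a-concise-message pattern the title points at, using a hypothetical `EncodingException` and helper rather than the real `ExpressionEncoder` code.

```
// Generic sketch of shortening a user-facing error message while keeping the
// full detail available via the cause; names here are illustrative.
final class EncodingException(msg: String, cause: Throwable)
  extends RuntimeException(msg, cause)

def toRowSafely[T](value: T)(encode: T => AnyRef): AnyRef =
  try encode(value)
  catch {
    case e: Exception =>
      // Keep the message short; verbose detail (e.g. generated expressions)
      // remains reachable through the wrapped cause.
      throw new EncodingException(
        s"Error while encoding a value of ${value.getClass.getName}: ${e.getMessage}", e)
  }
```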

spark git commit: [BUILD] Close stale PRs

2017-02-03  srowen
Repository: spark
Updated Branches: refs/heads/master bf493686e -> 20b4ca140

Closes #15736 Closes #16309 Closes #16485 Closes #16502 Closes #16196 Closes #16498 Closes #12380 Closes #16764 Closes #14394 Closes #14204 Closes #14027 Closes #13690 Closes #16279
Author:…

spark git commit: [SPARK-19411][SQL] Remove the metadata used to mark optional columns in merged Parquet schema for filter predicate pushdown

2017-02-03  rxin
Repository: spark
Updated Branches: refs/heads/master c86a57f4d -> bf493686e

## What changes were proposed in this pull request?
There is a metadata introduced before…
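As a reminder of the mechanism involved, the sketch below shows how a Spark SQL `StructField` can carry metadata of this kind; the key name and the pushdown check are hypothetical, not the actual marker this commit removes.

```
// Sketch of the mechanism: Spark SQL schema fields can carry arbitrary metadata.
// The key name below is hypothetical; the commit removes the real marker that
// had been used to guard Parquet filter pushdown on merged schemas.
import org.apache.spark.sql.types.{MetadataBuilder, StringType, StructField, StructType}

val optionalMarker = new MetadataBuilder()
  .putBoolean("exists_in_every_parquet_file", false) // hypothetical key
  .build()

val mergedSchema = StructType(Seq(
  StructField("id", StringType, nullable = true),
  StructField("extra_col", StringType, nullable = true, metadata = optionalMarker)
))

// Downstream code could inspect such a marker before pushing a filter down.
val meta = mergedSchema("extra_col").metadata
val existsEverywhere =
  meta.contains("exists_in_every_parquet_file") &&
  meta.getBoolean("exists_in_every_parquet_file")
// existsEverywhere == false, so a filter on extra_col would not be pushed down.
```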