spark git commit: [SPARK-16021][TEST-MAVEN] Fix the maven build

2016-07-06 Thread rxin
Repository: spark Updated Branches: refs/heads/master 69f539140 -> 4b5a72c7d [SPARK-16021][TEST-MAVEN] Fix the maven build ## What changes were proposed in this pull request? Fixed the maven build for #13983 ## How was this patch tested? The existing tests. Author: Shixiong Zhu

spark git commit: [SPARK-16398][CORE] Make cancelJob and cancelStage APIs public

2016-07-06 Thread rxin
Repository: spark Updated Branches: refs/heads/master 42279bff6 -> 69f539140 [SPARK-16398][CORE] Make cancelJob and cancelStage APIs public ## What changes were proposed in this pull request? Make SparkContext `cancelJob` and `cancelStage` APIs public. This allows applications to use
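
A minimal sketch of how an application might use the now-public APIs (app name, thread handling and timings below are illustrative, not from the commit): look up active job and stage ids via the status tracker, then cancel them.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object CancelJobSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("cancel-demo").setMaster("local[2]"))

    // Run a slow job on a background thread so the driver thread stays free to cancel it.
    new Thread(new Runnable {
      override def run(): Unit = {
        try {
          sc.parallelize(1 to 1000000, 8).map { i => Thread.sleep(1); i }.count()
        } catch {
          case _: Exception => // the cancellation surfaces here as a SparkException
        }
      }
    }).start()

    Thread.sleep(2000)
    // Cancel whatever the status tracker currently reports as active.
    sc.statusTracker.getActiveJobIds().foreach(id => sc.cancelJob(id))
    sc.statusTracker.getActiveStageIds().foreach(id => sc.cancelStage(id))

    sc.stop()
  }
}
```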

spark git commit: [SPARK-16374][SQL] Remove Alias from MetastoreRelation and SimpleCatalogRelation

2016-07-06 Thread wenchen
Repository: spark Updated Branches: refs/heads/master 34283de16 -> 42279bff6 [SPARK-16374][SQL] Remove Alias from MetastoreRelation and SimpleCatalogRelation What changes were proposed in this pull request? Unlike the other leaf nodes, `MetastoreRelation` and

spark git commit: [SPARK-14839][SQL] Support for other types for `tableProperty` rule in SQL syntax

2016-07-06 Thread hvanhovell
Repository: spark Updated Branches: refs/heads/master 44c7c62bc -> 34283de16 [SPARK-14839][SQL] Support for other types for `tableProperty` rule in SQL syntax ## What changes were proposed in this pull request? Currently, the Scala API supports options of the types `String`, `Long`,
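
The preview is cut off; as a hedged illustration of what the extended `tableProperty` rule allows (paths, view name and data are made up for this sketch), a boolean option value can be written as a bare literal in DDL instead of only as a quoted string:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("table-property-sketch").master("local[2]").getOrCreate()
import spark.implicits._

// Write a small Parquet dataset to query through the DDL below.
Seq((1, "a"), (2, "b")).toDF("id", "name").write.mode("overwrite").parquet("/tmp/tableproperty_demo")

// With the extended rule, the boolean value below can be written unquoted
// (previously only string literals such as 'true' were accepted).
spark.sql(
  """CREATE TEMPORARY VIEW demo_view
    |USING parquet
    |OPTIONS (path '/tmp/tableproperty_demo', mergeSchema true)
  """.stripMargin)

spark.sql("SELECT * FROM demo_view").show()
```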

spark git commit: [SPARK-16021] Fill freed memory in test to help catch correctness bugs

2016-07-06 Thread rxin
Repository: spark Updated Branches: refs/heads/master b8ebf63c1 -> 44c7c62bc [SPARK-16021] Fill freed memory in test to help catch correctness bugs ## What changes were proposed in this pull request? This patches `MemoryAllocator` to fill clean and freed memory with known byte values,
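
A rough illustration of the debugging technique described (the class and sentinel values below are invented for this sketch, not Spark's actual `MemoryAllocator`): newly allocated and freed buffers are overwritten with a fixed byte pattern so that reads of uninitialized or freed memory return obviously bogus data that tests can catch.

```scala
// Illustrative sketch only: poison buffers on allocation and on free so that
// use of uninitialized or freed memory shows up as a recognizable byte pattern
// instead of plausible-looking stale data.
object PoisoningAllocator {
  val CleanFill: Byte = 0x5a.toByte  // sentinel values chosen for this sketch
  val FreedFill: Byte = 0x5b.toByte

  def allocate(size: Int): Array[Byte] = {
    val buf = new Array[Byte](size)
    java.util.Arrays.fill(buf, CleanFill)
    buf
  }

  def free(buf: Array[Byte]): Unit = {
    java.util.Arrays.fill(buf, FreedFill)
  }
}
```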

spark git commit: [SPARK-16212][STREAMING][KAFKA] apply test tweaks from 0-10 to 0-8 as well

2016-07-06 Thread tdas
Repository: spark Updated Branches: refs/heads/master 8e3e4ed6c -> b8ebf63c1 [SPARK-16212][STREAMING][KAFKA] apply test tweaks from 0-10 to 0-8 as well ## What changes were proposed in this pull request? Bring the kafka-0-8 subproject up to date with some test modifications from development

spark git commit: [SPARK-16212][STREAMING][KAFKA] apply test tweaks from 0-10 to 0-8 as well

2016-07-06 Thread tdas
Repository: spark Updated Branches: refs/heads/branch-2.0 05ddc7517 -> 920162a1e [SPARK-16212][STREAMING][KAFKA] apply test tweaks from 0-10 to 0-8 as well ## What changes were proposed in this pull request? Bring the kafka-0-8 subproject up to date with some test modifications from

spark git commit: [SPARK-16371][SQL] Two follow-up tasks

2016-07-06 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-2.0 2c2b8f121 -> 05ddc7517 [SPARK-16371][SQL] Two follow-up tasks ## What changes were proposed in this pull request? This is a small follow-up for SPARK-16371: 1. Hide removeMetadata from public API. 2. Add JIRA ticket number to test

spark git commit: [SPARK-16371][SQL] Two follow-up tasks

2016-07-06 Thread rxin
Repository: spark Updated Branches: refs/heads/master 9c041990c -> 8e3e4ed6c [SPARK-16371][SQL] Two follow-up tasks ## What changes were proposed in this pull request? This is a small follow-up for SPARK-16371: 1. Hide removeMetadata from public API. 2. Add JIRA ticket number to test case

spark git commit: [MESOS] expand coarse-grained mode docs

2016-07-06 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-2.0 88be66b93 -> 2c2b8f121 [MESOS] expand coarse-grained mode docs ## What changes were proposed in this pull request? docs ## How was this patch tested? viewed the docs in github Author: Michael Gummelt

spark git commit: [MESOS] expand coarse-grained mode docs

2016-07-06 Thread rxin
Repository: spark Updated Branches: refs/heads/master a8f89df3b -> 9c041990c [MESOS] expand coarse-grained mode docs ## What changes were proposed in this pull request? docs ## How was this patch tested? viewed the docs in github Author: Michael Gummelt Closes

spark git commit: [SPARK-16379][CORE][MESOS] Spark on mesos is broken due to race condition in Logging

2016-07-06 Thread rxin
Repository: spark Updated Branches: refs/heads/master 040f6f9f4 -> a8f89df3b [SPARK-16379][CORE][MESOS] Spark on mesos is broken due to race condition in Logging ## What changes were proposed in this pull request? The commit
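
The preview cuts off before the details; as a hedged sketch of the general pattern involved (this is not Spark's actual `Logging` trait), lazy logger initialization can be guarded so two threads hitting it for the first time cannot race:

```scala
import org.slf4j.{Logger, LoggerFactory}

// Generic double-checked initialization of a lazily created logger; a sketch of
// the kind of guard that avoids such a race, not Spark's actual Logging trait.
trait SafeLogging {
  @transient @volatile private var _log: Logger = null

  protected def log: Logger = {
    if (_log == null) {
      this.synchronized {
        if (_log == null) {
          _log = LoggerFactory.getLogger(this.getClass.getName.stripSuffix("$"))
        }
      }
    }
    _log
  }
}
```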

spark git commit: [SPARK-16379][CORE][MESOS] Spark on mesos is broken due to race condition in Logging

2016-07-06 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-2.0 d7926da5e -> 88be66b93 [SPARK-16379][CORE][MESOS] Spark on mesos is broken due to race condition in Logging ## What changes were proposed in this pull request? The commit

spark git commit: [SPARK-15740][MLLIB] Word2VecSuite "big model load / save" caused OOM in maven jenkins builds

2016-07-06 Thread jkbradley
Repository: spark Updated Branches: refs/heads/branch-2.0 2465f0728 -> d7926da5e [SPARK-15740][MLLIB] Word2VecSuite "big model load / save" caused OOM in maven jenkins builds ## What changes were proposed in this pull request? The "test big model load / save" case in Word2VecSuite has lately resulted

spark git commit: [SPARK-15740][MLLIB] Word2VecSuite "big model load / save" caused OOM in maven jenkins builds

2016-07-06 Thread jkbradley
Repository: spark Updated Branches: refs/heads/master 4f8ceed59 -> 040f6f9f4 [SPARK-15740][MLLIB] Word2VecSuite "big model load / save" caused OOM in maven jenkins builds ## What changes were proposed in this pull request? The "test big model load / save" case in Word2VecSuite has lately resulted in

spark git commit: [SPARK-16371][SQL] Do not push down filters incorrectly when inner name and outer name are the same in Parquet

2016-07-06 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-2.0 03f336d89 -> 2465f0728 [SPARK-16371][SQL] Do not push down filters incorrectly when inner name and outer name are the same in Parquet ## What changes were proposed in this pull request? Currently, if there is a schema as below: ```

spark git commit: [SPARK-16371][SQL] Do not push down filters incorrectly when inner name and outer name are the same in Parquet

2016-07-06 Thread rxin
Repository: spark Updated Branches: refs/heads/master 480357cc6 -> 4f8ceed59 [SPARK-16371][SQL] Do not push down filters incorrectly when inner name and outer name are the same in Parquet ## What changes were proposed in this pull request? Currently, if there is a schema as below: ``` root
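
The schema in the preview is truncated; the sketch below (paths are placeholders) reproduces the shape of the problem: a top-level struct column and its nested leaf share the name `_1`, and a filter on the outer column must not be pushed down against the inner integer leaf of the same name.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("nested-name-sketch").master("local[2]").getOrCreate()
import spark.implicits._

// The top-level column "_1" is a struct whose only field is also called "_1",
// so a purely name-based pushdown check can confuse the two.
val df = Seq(Tuple1(Tuple1(1)), Tuple1(Tuple1(2))).toDF("_1")
df.write.mode("overwrite").parquet("/tmp/nested_same_name_demo")

// Filtering on the outer struct column should not be translated into a Parquet
// filter on the nested leaf that happens to carry the same name.
spark.read.parquet("/tmp/nested_same_name_demo").where($"_1".isNotNull).show()
```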

spark git commit: [SPARK-16304] LinkageError should not crash Spark executor

2016-07-06 Thread rxin
Repository: spark Updated Branches: refs/heads/master 4e14199ff -> 480357cc6 [SPARK-16304] LinkageError should not crash Spark executor ## What changes were proposed in this pull request? This patch updates the failure handling logic so the Spark executor does not crash when it sees a LinkageError.
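
A simplified, hedged sketch of the idea (not the executor's actual failure-handling code): a `LinkageError` thrown by user code is reported as a task-level failure rather than being allowed to escape and take down the whole executor JVM.

```scala
object TaskFailureHandlingSketch {
  // Sketch only: LinkageError (e.g. NoClassDefFoundError from a user jar) fails
  // the task and keeps the executor alive; truly JVM-fatal errors are rethrown.
  def runTaskSafely(taskBody: () => Unit): Unit = {
    try {
      taskBody()
    } catch {
      case e: LinkageError =>
        System.err.println(s"Task failed with LinkageError: ${e.getMessage}")
      case e: VirtualMachineError =>
        throw e
      case e: Throwable =>
        System.err.println(s"Task failed: ${e.getMessage}")
    }
  }

  def main(args: Array[String]): Unit = {
    runTaskSafely(() => throw new NoClassDefFoundError("com.example.Missing"))
    println("executor still alive")
  }
}
```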

spark git commit: [MINOR][PYSPARK][DOC] Fix wrongly formatted examples in PySpark documentation

2016-07-06 Thread rxin
Repository: spark Updated Branches: refs/heads/master b1310425b -> 4e14199ff [MINOR][PYSPARK][DOC] Fix wrongly formatted examples in PySpark documentation ## What changes were proposed in this pull request? This PR fixes wrongly formatted examples in PySpark documentation as below: -

spark git commit: [MINOR][PYSPARK][DOC] Fix wrongly formatted examples in PySpark documentation

2016-07-06 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-2.0 091cd5f26 -> 03f336d89 [MINOR][PYSPARK][DOC] Fix wrongly formatted examples in PySpark documentation ## What changes were proposed in this pull request? This PR fixes wrongly formatted examples in PySpark documentation as below: -

spark git commit: [DOC][SQL] update out-of-date code snippets using SQLContext in all documents.

2016-07-06 Thread rxin
Repository: spark Updated Branches: refs/heads/master 23eff5e51 -> b1310425b [DOC][SQL] update out-of-date code snippets using SQLContext in all documents. ## What changes were proposed in this pull request? I searched the whole docs directory for SQLContext and updated the following
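
The list of updated files is cut off above; as a hedged before/after illustration of the kind of snippet being updated (the JSON path is the standard Spark example resource, relative to a Spark distribution):

```scala
import org.apache.spark.sql.SparkSession

// Old style shown in pre-2.0 docs:
//   val sqlContext = new org.apache.spark.sql.SQLContext(sc)
//   val df = sqlContext.read.json("examples/src/main/resources/people.json")

// Updated style: SparkSession is the single entry point since Spark 2.0.
val spark = SparkSession.builder().appName("doc-snippet").master("local[2]").getOrCreate()
val df = spark.read.json("examples/src/main/resources/people.json")
df.show()
```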

spark git commit: [DOC][SQL] update out-of-date code snippets using SQLContext in all documents.

2016-07-06 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-2.0 e956bd775 -> 091cd5f26 [DOC][SQL] update out-of-date code snippets using SQLContext in all documents. ## What changes were proposed in this pull request? I searched the whole docs directory for SQLContext and updated the

spark git commit: [SPARK-15979][SQL] Renames CatalystWriteSupport to ParquetWriteSupport

2016-07-06 Thread rxin
Repository: spark Updated Branches: refs/heads/master 478b71d02 -> 23eff5e51 [SPARK-15979][SQL] Renames CatalystWriteSupport to ParquetWriteSupport ## What changes were proposed in this pull request? PR #13696 renamed various Parquet support classes but left `CatalystWriteSupport` behind.

spark git commit: [SPARK-15591][WEBUI] Paginate Stage Table in Stages tab

2016-07-06 Thread zsxwing
Repository: spark Updated Branches: refs/heads/master 21eadd1d8 -> 478b71d02 [SPARK-15591][WEBUI] Paginate Stage Table in Stages tab ## What changes were proposed in this pull request? This patch adds pagination support for the Stage Tables in the Stage tab. Pagination is provided for all

spark git commit: [MINOR][CORE][1.6-BACKPORT] Fix display wrong free memory size in the log

2016-07-06 Thread srowen
Repository: spark Updated Branches: refs/heads/branch-1.6 76781950f -> 2588776ad [MINOR][CORE][1.6-BACKPORT] Fix display wrong free memory size in the log ## What changes were proposed in this pull request? The free memory size displayed in the log is wrong (it shows the used memory); fix it to make it

spark git commit: [SPARK-16229][SQL] Drop Empty Table After CREATE TABLE AS SELECT fails

2016-07-06 Thread wenchen
Repository: spark Updated Branches: refs/heads/branch-2.0 d5d2457e4 -> e956bd775 [SPARK-16229][SQL] Drop Empty Table After CREATE TABLE AS SELECT fails What changes were proposed in this pull request? In `CREATE TABLE AS SELECT`, if the `SELECT` query failed, the table should not exist.

spark git commit: [SPARK-16229][SQL] Drop Empty Table After CREATE TABLE AS SELECT fails

2016-07-06 Thread wenchen
Repository: spark Updated Branches: refs/heads/master 909c6d812 -> 21eadd1d8 [SPARK-16229][SQL] Drop Empty Table After CREATE TABLE AS SELECT fails What changes were proposed in this pull request? In `CREATE TABLE AS SELECT`, if the `SELECT` query failed, the table should not exist. For
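
A hedged sketch of the behaviour being fixed (the table name and always-failing UDF below are invented for illustration): when the SELECT part of a CTAS throws, the partially created table should no longer be left in the catalog.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("ctas-failure-sketch").master("local[2]").getOrCreate()

// A UDF that always fails, standing in for any error during the SELECT.
spark.udf.register("boom", () => { throw new RuntimeException("boom"); 1 })

try {
  spark.sql("CREATE TABLE ctas_demo USING parquet AS SELECT boom() AS c")
} catch {
  case _: Exception => // the CTAS query is expected to fail here
}

// With the fix, the failed CTAS should not leave an empty table behind.
val leftBehind = spark.catalog.listTables().collect().exists(_.name == "ctas_demo")
println(s"ctas_demo still exists: $leftBehind")
```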

spark git commit: [SPARK-15968][SQL] Nonempty partitioned metastore tables are not cached

2016-07-06 Thread wenchen
Repository: spark Updated Branches: refs/heads/branch-2.0 25006c8bc -> d5d2457e4 [SPARK-15968][SQL] Nonempty partitioned metastore tables are not cached This PR backports your fix (https://github.com/apache/spark/pull/13818) to branch 2.0. This PR addresses

spark git commit: [MINOR][BUILD] Download Maven 3.3.9 instead of 3.3.3 because the latter is no longer published on Apache mirrors

2016-07-06 Thread srowen
Repository: spark Updated Branches: refs/heads/branch-1.6 4fcb88843 -> 76781950f [MINOR][BUILD] Download Maven 3.3.9 instead of 3.3.3 because the latter is no longer published on Apache mirrors ## What changes were proposed in this pull request? Download Maven 3.3.9 instead of 3.3.3 because

spark git commit: [SPARK-16307][ML] Add test to verify the predicted variances of a DT on toy data

2016-07-06 Thread yliang
Repository: spark Updated Branches: refs/heads/master 7e28fabdf -> 909c6d812 [SPARK-16307][ML] Add test to verify the predicted variances of a DT on toy data ## What changes were proposed in this pull request? The current tests assume that `impurity.calculate()` returns the variance

spark git commit: [SPARK-16388][SQL] Remove spark.sql.nativeView and spark.sql.nativeView.canonical config

2016-07-06 Thread lian
Repository: spark Updated Branches: refs/heads/master 5497242c7 -> 7e28fabdf [SPARK-16388][SQL] Remove spark.sql.nativeView and spark.sql.nativeView.canonical config ## What changes were proposed in this pull request? These two configs should always be true after Spark 2.0. This patch

spark git commit: [SPARK-16249][ML] Change visibility of Object ml.clustering.LDA to public for loading

2016-07-06 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.0 521fc7186 -> 25006c8bc [SPARK-16249][ML] Change visibility of Object ml.clustering.LDA to public for loading ## What changes were proposed in this pull request? jira: https://issues.apache.org/jira/browse/SPARK-16249 Change visibility

spark git commit: [SPARK-16249][ML] Change visibility of Object ml.clustering.LDA to public for loading

2016-07-06 Thread yliang
Repository: spark Updated Branches: refs/heads/master 5f342049c -> 5497242c7 [SPARK-16249][ML] Change visibility of Object ml.clustering.LDA to public for loading ## What changes were proposed in this pull request? jira: https://issues.apache.org/jira/browse/SPARK-16249 Change visibility of
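
A short hedged illustration of what the public object enables (path and parameters are placeholders): an LDA estimator persisted with MLWriter can be restored directly via `LDA.load`.

```scala
import org.apache.spark.ml.clustering.LDA
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("lda-load-sketch").master("local[2]").getOrCreate()

// Persist an LDA estimator and read it back; LDA.load is the MLReadable entry
// point that the now-public object exposes.
val lda = new LDA().setK(10).setMaxIter(5)
lda.write.overwrite().save("/tmp/lda-estimator-demo")
val restored: LDA = LDA.load("/tmp/lda-estimator-demo")
println(restored.getK)
```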

spark git commit: [SPARK-16339][CORE] ScriptTransform does not print stderr when outstream is lost

2016-07-06 Thread srowen
Repository: spark Updated Branches: refs/heads/branch-2.0 6e8fa86eb -> 521fc7186 [SPARK-16339][CORE] ScriptTransform does not print stderr when outstream is lost ## What changes were proposed in this pull request? Currently, if due to some failure, the outstream gets destroyed or closed and

spark git commit: [SPARK-16339][CORE] ScriptTransform does not print stderr when outstream is lost

2016-07-06 Thread srowen
Repository: spark Updated Branches: refs/heads/master ec79183ac -> 5f342049c [SPARK-16339][CORE] ScriptTransform does not print stderr when outstream is lost ## What changes were proposed in this pull request? Currently, if due to some failure, the outstream gets destroyed or closed and