[spark] Git Push Summary

2017-10-26 Thread lixiao
Repository: spark Updated Branches: refs/heads/test2.2 [deleted] cb54f297a

spark git commit: [SPARK-22356][SQL] data source table should support overlapped columns between data and partition schema

2017-10-26 Thread lixiao
Repository: spark Updated Branches: refs/heads/branch-2.2 2839280ad -> cb54f297a [SPARK-22356][SQL] data source table should support overlapped columns between data and partition schema This is a regression introduced by #14207. After Spark 2.1, we store the inferred schema when creating

spark git commit: [SPARK-22356][SQL] data source table should support overlapped columns between data and partition schema

2017-10-26 Thread lixiao
Repository: spark Updated Branches: refs/heads/test2.2 [created] cb54f297a [SPARK-22356][SQL] data source table should support overlapped columns between data and partition schema This is a regression introduced by #14207. After Spark 2.1, we store the inferred schema when creating the

spark git commit: [SPARK-22355][SQL] Dataset.collect is not threadsafe

2017-10-26 Thread lixiao
Repository: spark Updated Branches: refs/heads/branch-2.2 a607ddc52 -> 2839280ad [SPARK-22355][SQL] Dataset.collect is not threadsafe It's possible that users create a `Dataset` and call `collect` on this `Dataset` from many threads at the same time. Currently `Dataset#collect` just calls

spark git commit: [SPARK-22355][SQL] Dataset.collect is not threadsafe

2017-10-26 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 9b262f6a0 -> 5c3a1f3fa [SPARK-22355][SQL] Dataset.collect is not threadsafe ## What changes were proposed in this pull request? It's possible that users create a `Dataset` and call `collect` on this `Dataset` from many threads at the same
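The usage pattern the fix guards against is easy to sketch: one `Dataset` shared by several threads, each calling `collect`. The snippet below is illustrative only (local mode, made-up data), not code from the commit.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

import org.apache.spark.sql.SparkSession

object ConcurrentCollectSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[2]").appName("collect-sketch").getOrCreate()
    import spark.implicits._

    // One Dataset shared across threads.
    val ds = (1 to 1000).toDS().map(_ * 2)

    // Several concurrent collect() calls on the same Dataset -- the usage this
    // commit makes safe.
    val futures = (1 to 4).map(_ => Future(ds.collect().length))
    val sizes = Await.result(Future.sequence(futures), 1.minute)
    println(sizes.mkString(", "))

    spark.stop()
  }
}
```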

spark git commit: [SPARK-22356][SQL] data source table should support overlapped columns between data and partition schema

2017-10-26 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 8e9863531 -> 9b262f6a0 [SPARK-22356][SQL] data source table should support overlapped columns between data and partition schema ## What changes were proposed in this pull request? This is a regression introduced by #14207. After Spark
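For readers unfamiliar with the overlap in question: it arises when a data file itself contains a column that is also encoded in the partition directory name, so the inferred data schema and the inferred partition schema share a column. A minimal illustrative reproduction (hypothetical local path, not taken from the PR's tests):

```scala
import org.apache.spark.sql.SparkSession

object OverlappedColumnsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[2]").appName("overlap-sketch").getOrCreate()
    import spark.implicits._

    val path = "/tmp/overlap_demo"  // illustrative path

    // The data file itself contains column `p`, and the directory name `p=1`
    // also declares `p` as a partition column, so `p` appears in both the
    // data schema and the partition schema.
    Seq((1, "a", 1), (2, "b", 1)).toDF("i", "j", "p")
      .write.mode("overwrite").parquet(s"$path/p=1")

    spark.read.parquet(path).printSchema()
    spark.read.parquet(path).show()

    spark.stop()
  }
}
```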

spark git commit: [SPARK-22366] Support ignoring missing files

2017-10-26 Thread zsxwing
Repository: spark Updated Branches: refs/heads/master 5415963d2 -> 8e9863531 [SPARK-22366] Support ignoring missing files ## What changes were proposed in this pull request? Add a flag "spark.sql.files.ignoreMissingFiles" to parallel the existing flag "spark.sql.files.ignoreCorruptFiles".
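Setting the new flag mirrors how the existing corrupt-files flag is used; a small usage sketch (the input path is illustrative):

```scala
// Enable at runtime on an existing SparkSession; both flags can also be
// passed as --conf options to spark-submit.
spark.conf.set("spark.sql.files.ignoreMissingFiles", "true")
spark.conf.set("spark.sql.files.ignoreCorruptFiles", "true")

// With ignoreMissingFiles on, a scan no longer fails if a file that was
// listed during planning has been deleted before a task reads it.
spark.read.parquet("/data/events").count()  // path is illustrative
```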

spark git commit: [SPARK-22131][MESOS] Mesos driver secrets

2017-10-26 Thread vanzin
Repository: spark Updated Branches: refs/heads/master 4f8dc6b01 -> 5415963d2 [SPARK-22131][MESOS] Mesos driver secrets ## Background In #18837, ArtRand added Mesos secrets support to the dispatcher. **This PR is to add the same secrets support to the drivers.** This means if the secret

spark git commit: [SPARK-22328][CORE] ClosureCleaner should not miss referenced superclass fields

2017-10-26 Thread wenchen
Repository: spark Updated Branches: refs/heads/branch-2.2 24fe7ccba -> a607ddc52 [SPARK-22328][CORE] ClosureCleaner should not miss referenced superclass fields When the given closure uses some fields defined in its super class, `ClosureCleaner` can't figure them out and doesn't set them properly. Those

spark git commit: [SPARK-22328][CORE] ClosureCleaner should not miss referenced superclass fields

2017-10-26 Thread wenchen
Repository: spark Updated Branches: refs/heads/master 0e9a750a8 -> 4f8dc6b01 [SPARK-22328][CORE] ClosureCleaner should not miss referenced superclass fields ## What changes were proposed in this pull request? When the given closure uses some fields defined in its super class, `ClosureCleaner`
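The shape of closure at issue is one that reads a field declared in a superclass of the class constructing it; an illustrative example (class names invented here):

```scala
import org.apache.spark.sql.SparkSession

// `factor` lives in the super class, not in the class whose method builds
// the closure passed to map(); this is the kind of reference the commit says
// ClosureCleaner previously missed.
abstract class HasFactor extends Serializable {
  val factor: Int = 3
}

class Multiplier extends HasFactor {
  def run(spark: SparkSession): Array[Int] =
    spark.sparkContext.parallelize(1 to 5).map(_ * factor).collect()
}
```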

spark git commit: [SPARK-20643][CORE] Add listener implementation to collect app state.

2017-10-26 Thread irashid
Repository: spark Updated Branches: refs/heads/master a83d8d5ad -> 0e9a750a8 [SPARK-20643][CORE] Add listener implementation to collect app state. The initial listener code is based on the existing JobProgressListener (and others), and tries to mimic their behavior as much as possible. The
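The listener added by the PR is far larger than this, but the general pattern it follows -- a `SparkListener` that turns scheduler events into queryable application state, as `JobProgressListener` does -- looks roughly like the sketch below (not the listener introduced by SPARK-20643):

```scala
import java.util.concurrent.atomic.AtomicInteger

import org.apache.spark.scheduler.{SparkListener, SparkListenerJobEnd, SparkListenerJobStart, SparkListenerStageCompleted}

// Minimal illustrative listener: counts active jobs and completed stages.
class TinyAppStateListener extends SparkListener {
  val activeJobs = new AtomicInteger(0)
  val completedStages = new AtomicInteger(0)

  override def onJobStart(jobStart: SparkListenerJobStart): Unit = {
    activeJobs.incrementAndGet()
  }

  override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit = {
    activeJobs.decrementAndGet()
  }

  override def onStageCompleted(stageCompleted: SparkListenerStageCompleted): Unit = {
    completedStages.incrementAndGet()
  }
}

// Registered on the application's SparkContext:
//   spark.sparkContext.addSparkListener(new TinyAppStateListener)
```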

spark git commit: [SPARK-17902][R] Revive stringsAsFactors option for collect() in SparkR

2017-10-26 Thread gurwls223
Repository: spark Updated Branches: refs/heads/branch-2.1 3e77b7481 -> aa023fddb [SPARK-17902][R] Revive stringsAsFactors option for collect() in SparkR ## What changes were proposed in this pull request? This PR proposes to revive `stringsAsFactors` option in collect API, which was

spark git commit: [SPARK-17902][R] Revive stringsAsFactors option for collect() in SparkR

2017-10-26 Thread gurwls223
Repository: spark Updated Branches: refs/heads/master 3073344a2 -> a83d8d5ad [SPARK-17902][R] Revive stringsAsFactors option for collect() in SparkR ## What changes were proposed in this pull request? This PR proposes to revive `stringsAsFactors` option in collect API, which was mistakenly

spark git commit: [SPARK-17902][R] Revive stringsAsFactors option for collect() in SparkR

2017-10-26 Thread gurwls223
Repository: spark Updated Branches: refs/heads/branch-2.2 d2dc175a1 -> 24fe7ccba [SPARK-17902][R] Revive stringsAsFactors option for collect() in SparkR ## What changes were proposed in this pull request? This PR proposes to revive `stringsAsFactors` option in collect API, which was

spark git commit: [SPARK-21840][CORE] Add trait that allows conf to be directly set in application.

2017-10-26 Thread jshao
Repository: spark Updated Branches: refs/heads/master 592cfeab9 -> 3073344a2 [SPARK-21840][CORE] Add trait that allows conf to be directly set in application. Currently SparkSubmit uses system properties to propagate configuration to applications. This makes it hard to implement features
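The exact trait name is not visible in this excerpt, so the shape below is an assumption: an entry point that receives a `SparkConf` directly instead of reading `-Dspark.*` system properties set by SparkSubmit.

```scala
import org.apache.spark.SparkConf

// Names here are illustrative, not necessarily the ones the PR introduces.
trait ConfiguredApplication {
  def start(args: Array[String], conf: SparkConf): Unit
}

object MyApp extends ConfiguredApplication {
  override def start(args: Array[String], conf: SparkConf): Unit = {
    // The configuration arrives as an argument rather than via
    // JVM system properties.
    println(conf.get("spark.app.name", "unnamed"))
  }
}
```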

spark git commit: [SPARK-22308] Support alternative unit testing styles in external applications

2017-10-26 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 5433be44c -> 592cfeab9 [SPARK-22308] Support alternative unit testing styles in external applications ## What changes were proposed in this pull request? Support unit tests of external code (i.e., applications that use Spark) using
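As an illustration of "alternative styles", here is an external application test written in ScalaTest's `FlatSpec` style rather than `FunSuite`; the session setup is hand-rolled and does not rely on Spark's internal test traits.

```scala
import org.apache.spark.sql.SparkSession
import org.scalatest.{BeforeAndAfterAll, FlatSpec, Matchers}

class WordCountSpec extends FlatSpec with Matchers with BeforeAndAfterAll {
  private var spark: SparkSession = _

  override def beforeAll(): Unit = {
    spark = SparkSession.builder().master("local[2]").appName("wordcount-spec").getOrCreate()
  }

  override def afterAll(): Unit = {
    if (spark != null) spark.stop()
  }

  "toDS" should "preserve the number of rows" in {
    import spark.implicits._
    Seq("a", "b", "c").toDS().count() shouldBe 3L
  }
}
```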