spark git commit: [SPARK-15235][WEBUI] Corresponding row cannot be highlighted even though cursor is on the job on Web UI's timeline

2016-05-10 Thread rxin
Repository: spark Updated Branches: refs/heads/master 9f0a642f8 -> ba181c0c7 [SPARK-15235][WEBUI] Corresponding row cannot be highlighted even though cursor is on the job on Web UI's timeline ## What changes were proposed in this pull request? To extract job descriptions and stage name,

spark git commit: [SPARK-15235][WEBUI] Corresponding row cannot be highlighted even though cursor is on the job on Web UI's timeline

2016-05-10 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-2.0 d9288b804 -> ca5ce5365 [SPARK-15235][WEBUI] Corresponding row cannot be highlighted even though cursor is on the job on Web UI's timeline ## What changes were proposed in this pull request? To extract job descriptions and stage name,

spark git commit: [SPARK-15246][SPARK-4452][CORE] Fix code style and improve volatile for

2016-05-10 Thread rxin
Repository: spark Updated Branches: refs/heads/master 1fbe2785d -> 9f0a642f8 [SPARK-15246][SPARK-4452][CORE] Fix code style and improve volatile for ## What changes were proposed in this pull request? 1. Fix code style. 2. Remove volatile from the elementsRead method because there is only one thread

spark git commit: [SPARK-15246][SPARK-4452][CORE] Fix code style and improve volatile for

2016-05-10 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-2.0 1b446a461 -> d9288b804 [SPARK-15246][SPARK-4452][CORE] Fix code style and improve volatile for ## What changes were proposed in this pull request? 1. Fix code style. 2. Remove volatile from the elementsRead method because there is only one

spark git commit: [SPARK-15255][SQL] limit the length of name for cached DataFrame

2016-05-10 Thread rxin
Repository: spark Updated Branches: refs/heads/master 665545960 -> 1fbe2785d [SPARK-15255][SQL] limit the length of name for cached DataFrame ## What changes were proposed in this pull request? We use the tree string of a SparkPlan as the name of a cached DataFrame, which could be very long,

spark git commit: [SPARK-15255][SQL] limit the length of name for cached DataFrame

2016-05-10 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-2.0 a675f5e1d -> 1b446a461 [SPARK-15255][SQL] limit the length of name for cached DataFrame ## What changes were proposed in this pull request? We use the tree string of a SparkPlan as the name of a cached DataFrame, which could be very
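
The idea behind the fix can be sketched outside Spark: derive the cache name from the plan string, but cap its length so a huge query plan does not yield an unmanageable name. The function name `cache_name` and the 1000-character cap are illustrative assumptions, not the actual Spark implementation (which is in Scala).

```python
def cache_name(plan_string, max_length=1000):
    # Use the plan's tree string as the display name, but truncate it so a
    # huge query plan does not produce an unmanageably long name.
    # max_length is an assumed cap, not the value Spark actually uses.
    if len(plan_string) > max_length:
        return plan_string[:max_length - 3] + "..."
    return plan_string
```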

spark git commit: [SPARK-15265][SQL][MINOR] Fix Union query error message indentation

2016-05-10 Thread rxin
Repository: spark Updated Branches: refs/heads/master 3ff012051 -> 665545960 [SPARK-15265][SQL][MINOR] Fix Union query error message indentation ## What changes were proposed in this pull request? This issue fixes the error message indentation consistently with other set queries

spark git commit: [SPARK-15265][SQL][MINOR] Fix Union query error message indentation

2016-05-10 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-2.0 0ecc105d2 -> a675f5e1d [SPARK-15265][SQL][MINOR] Fix Union query error message indentation ## What changes were proposed in this pull request? This issue fixes the error message indentation consistently with other set queries

spark git commit: [SPARK-15250][SQL] Remove deprecated json API in DataFrameReader

2016-05-10 Thread rxin
Repository: spark Updated Branches: refs/heads/master 5a5b83c97 -> 3ff012051 [SPARK-15250][SQL] Remove deprecated json API in DataFrameReader ## What changes were proposed in this pull request? This PR removes the old `json(path: String)` API which is covered by the new `json(paths:

spark git commit: [SPARK-15250][SQL] Remove deprecated json API in DataFrameReader

2016-05-10 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-2.0 03dfe7830 -> 0ecc105d2 [SPARK-15250][SQL] Remove deprecated json API in DataFrameReader ## What changes were proposed in this pull request? This PR removes the old `json(path: String)` API which is covered by the new `json(paths:

spark git commit: [SPARK-15261][SQL] Remove experimental tag from DataFrameReader/Writer

2016-05-10 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-2.0 5e3192a9a -> 03dfe7830 [SPARK-15261][SQL] Remove experimental tag from DataFrameReader/Writer ## What changes were proposed in this pull request? This patch removes experimental tag from DataFrameReader and DataFrameWriter, and

spark git commit: [SPARK-15261][SQL] Remove experimental tag from DataFrameReader/Writer

2016-05-10 Thread rxin
Repository: spark Updated Branches: refs/heads/master 61e0bdcff -> 5a5b83c97 [SPARK-15261][SQL] Remove experimental tag from DataFrameReader/Writer ## What changes were proposed in this pull request? This patch removes experimental tag from DataFrameReader and DataFrameWriter, and explicitly

spark git commit: [SPARK-14476][SQL] Improve the physical plan visualization by adding meta info like table name and file path for data source.

2016-05-10 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-2.0 d8c2da9a4 -> 5e3192a9a [SPARK-14476][SQL] Improve the physical plan visualization by adding meta info like table name and file path for data source. ## What changes were proposed in this pull request? Improve the physical plan

spark git commit: [SPARK-14837][SQL][STREAMING] Added support in file stream source for reading new files added to subdirs

2016-05-10 Thread yhuai
Repository: spark Updated Branches: refs/heads/master 86475520f -> d9ca9fd3e [SPARK-14837][SQL][STREAMING] Added support in file stream source for reading new files added to subdirs ## What changes were proposed in this pull request? Currently, file stream source can only find new files if

spark git commit: [SPARK-14837][SQL][STREAMING] Added support in file stream source for reading new files added to subdirs

2016-05-10 Thread yhuai
Repository: spark Updated Branches: refs/heads/branch-2.0 f021f3460 -> d8c2da9a4 [SPARK-14837][SQL][STREAMING] Added support in file stream source for reading new files added to subdirs ## What changes were proposed in this pull request? Currently, file stream source can only find new files

spark git commit: [SPARK-14936][BUILD][TESTS] FlumePollingStreamSuite is slow

2016-05-10 Thread zsxwing
Repository: spark Updated Branches: refs/heads/branch-2.0 1db027d11 -> f021f3460 [SPARK-14936][BUILD][TESTS] FlumePollingStreamSuite is slow https://issues.apache.org/jira/browse/SPARK-14936 ## What changes were proposed in this pull request? FlumePollingStreamSuite contains two tests which

spark git commit: [SPARK-14936][BUILD][TESTS] FlumePollingStreamSuite is slow

2016-05-10 Thread zsxwing
Repository: spark Updated Branches: refs/heads/master da02d006b -> 86475520f [SPARK-14936][BUILD][TESTS] FlumePollingStreamSuite is slow https://issues.apache.org/jira/browse/SPARK-14936 ## What changes were proposed in this pull request? FlumePollingStreamSuite contains two tests which run

spark git commit: [SPARK-15249][SQL] Use FunctionResource instead of (String, String) in CreateFunction and CatalogFunction for resource

2016-05-10 Thread andrewor14
Repository: spark Updated Branches: refs/heads/master 9533f5390 -> da02d006b [SPARK-15249][SQL] Use FunctionResource instead of (String, String) in CreateFunction and CatalogFunction for resource Use FunctionResource instead of (String, String) in CreateFunction and CatalogFunction for

spark git commit: [SPARK-6005][TESTS] Fix flaky test: o.a.s.streaming.kafka.DirectKafkaStreamSuite.offset recovery

2016-05-10 Thread zsxwing
Repository: spark Updated Branches: refs/heads/master 603c4f8eb -> 9533f5390 [SPARK-6005][TESTS] Fix flaky test: o.a.s.streaming.kafka.DirectKafkaStreamSuite.offset recovery ## What changes were proposed in this pull request? Because this test extracts data from `DStream.generatedRDDs`

spark git commit: [SPARK-6005][TESTS] Fix flaky test: o.a.s.streaming.kafka.DirectKafkaStreamSuite.offset recovery

2016-05-10 Thread zsxwing
Repository: spark Updated Branches: refs/heads/branch-2.0 0ab195886 -> 95f254994 [SPARK-6005][TESTS] Fix flaky test: o.a.s.streaming.kafka.DirectKafkaStreamSuite.offset recovery ## What changes were proposed in this pull request? Because this test extracts data from `DStream.generatedRDDs`

spark git commit: [SPARK-15207][BUILD] Use Travis CI for Java Linter and JDK7/8 compilation test

2016-05-10 Thread srowen
Repository: spark Updated Branches: refs/heads/master d28c67544 -> 603c4f8eb [SPARK-15207][BUILD] Use Travis CI for Java Linter and JDK7/8 compilation test ## What changes were proposed in this pull request? Currently, Java Linter is disabled in Jenkins tests.

spark git commit: [SPARK-14986][SQL] Return correct result for empty LATERAL VIEW OUTER

2016-05-10 Thread yhuai
Repository: spark Updated Branches: refs/heads/branch-2.0 5a4a188fe -> 0ab195886 [SPARK-14986][SQL] Return correct result for empty LATERAL VIEW OUTER ## What changes were proposed in this pull request? A Generate with the `outer` flag enabled should always return one or more rows for every

spark git commit: [SPARK-14986][SQL] Return correct result for empty LATERAL VIEW OUTER

2016-05-10 Thread yhuai
Repository: spark Updated Branches: refs/heads/master 89f73f674 -> d28c67544 [SPARK-14986][SQL] Return correct result for empty LATERAL VIEW OUTER ## What changes were proposed in this pull request? A Generate with the `outer` flag enabled should always return one or more rows for every
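
The `outer` semantics described above can be sketched in plain Python: an outer explode must still emit one row with a null element when the generator produces nothing, instead of dropping the input row entirely. The names (`outer_explode`, the `col` key) are illustrative, not Spark's actual Generate operator.

```python
def outer_explode(rows, column):
    # For each input row, emit one output row per element of row[column].
    # With 'outer' semantics, an empty or missing array still yields a
    # single row whose exploded value is None (SQL NULL).
    out = []
    for row in rows:
        values = row.get(column) or []
        if not values:
            out.append({**row, "col": None})
        else:
            for v in values:
                out.append({**row, "col": v})
    return out
```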

spark git commit: [SPARK-14642][SQL] import org.apache.spark.sql.expressions._ breaks udf under functions

2016-05-10 Thread zsxwing
Repository: spark Updated Branches: refs/heads/branch-2.0 82f69594f -> 5a4a188fe [SPARK-14642][SQL] import org.apache.spark.sql.expressions._ breaks udf under functions ## What changes were proposed in this pull request? This PR fixes the import issue that breaks udf functions. The following

spark git commit: [SPARK-14642][SQL] import org.apache.spark.sql.expressions._ breaks udf under functions

2016-05-10 Thread zsxwing
Repository: spark Updated Branches: refs/heads/master 93353b011 -> 89f73f674 [SPARK-14642][SQL] import org.apache.spark.sql.expressions._ breaks udf under functions ## What changes were proposed in this pull request? This PR fixes the import issue that breaks udf functions. The following code

spark git commit: [SPARK-15195][PYSPARK][DOCS] Update ml.tuning PyDocs

2016-05-10 Thread mlnick
Repository: spark Updated Branches: refs/heads/branch-2.0 a432e80b8 -> 82f69594f [SPARK-15195][PYSPARK][DOCS] Update ml.tuning PyDocs ## What changes were proposed in this pull request? Tag classes in ml.tuning as experimental, add docs for kfolds avg metric, and copy TrainValidationSplit

spark git commit: [SPARK-15195][PYSPARK][DOCS] Update ml.tuning PyDocs

2016-05-10 Thread mlnick
Repository: spark Updated Branches: refs/heads/master 69641066a -> 93353b011 [SPARK-15195][PYSPARK][DOCS] Update ml.tuning PyDocs ## What changes were proposed in this pull request? Tag classes in ml.tuning as experimental, add docs for kfolds avg metric, and copy TrainValidationSplit

spark git commit: [SPARK-15037][HOTFIX] Don't create 2 SparkSessions in constructor

2016-05-10 Thread andrewor14
Repository: spark Updated Branches: refs/heads/master db3b4a201 -> 69641066a [SPARK-15037][HOTFIX] Don't create 2 SparkSessions in constructor ## What changes were proposed in this pull request? After #12907 `TestSparkSession` creates a spark session in one of the constructors just to get

spark git commit: [SPARK-15037][HOTFIX] Replace `sqlContext` and `sparkSession` with `spark`.

2016-05-10 Thread andrewor14
Repository: spark Updated Branches: refs/heads/master cddb9da07 -> db3b4a201 [SPARK-15037][HOTFIX] Replace `sqlContext` and `sparkSession` with `spark`. This replaces `sparkSession` with `spark` in CatalogSuite.scala. Pass the Jenkins tests. Author: Dongjoon Hyun

spark git commit: [SPARK-15037][HOTFIX] Replace `sqlContext` and `sparkSession` with `spark`.

2016-05-10 Thread andrewor14
Repository: spark Updated Branches: refs/heads/branch-2.0 42db140c5 -> bd7fd14c9 [SPARK-15037][HOTFIX] Replace `sqlContext` and `sparkSession` with `spark`. This replaces `sparkSession` with `spark` in CatalogSuite.scala. Pass the Jenkins tests. Author: Dongjoon Hyun

spark git commit: [HOTFIX] SQL test compilation error from merge conflict

2016-05-10 Thread andrewor14
Repository: spark Updated Branches: refs/heads/master 5c6b08557 -> cddb9da07 [HOTFIX] SQL test compilation error from merge conflict Project: http://git-wip-us.apache.org/repos/asf/spark/repo Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/cddb9da0 Tree:

spark git commit: [SPARK-14603][SQL] Verification of Metadata Operations by Session Catalog

2016-05-10 Thread andrewor14
Repository: spark Updated Branches: refs/heads/branch-2.0 5bf74b44d -> 42db140c5 [SPARK-14603][SQL] Verification of Metadata Operations by Session Catalog Since we cannot really trust that the underlying external catalog will throw exceptions on an invalid metadata operation, let's

spark git commit: [SPARK-14603][SQL] Verification of Metadata Operations by Session Catalog

2016-05-10 Thread andrewor14
Repository: spark Updated Branches: refs/heads/master ed0b4070f -> 5c6b08557 [SPARK-14603][SQL] Verification of Metadata Operations by Session Catalog Since we cannot really trust that the underlying external catalog will throw exceptions on an invalid metadata operation, let's do it
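
The verification idea can be sketched as a toy catalog that checks each metadata operation up front rather than trusting the external catalog to raise. The class and method names are illustrative only, not Spark's actual `SessionCatalog` API.

```python
class SessionCatalog:
    # Toy session catalog: validate metadata operations itself instead of
    # relying on the external catalog to reject invalid ones.
    def __init__(self):
        self._tables = {}

    def create_table(self, name, schema):
        if name in self._tables:
            raise ValueError(f"Table {name} already exists")
        self._tables[name] = schema

    def drop_table(self, name):
        if name not in self._tables:
            raise ValueError(f"Table {name} does not exist")
        del self._tables[name]
```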

[07/10] spark git commit: [SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java TestSuites

2016-05-10 Thread andrewor14
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/mllib/src/test/scala/org/apache/spark/ml/feature/StopWordsRemoverSuite.scala -- diff --git a/mllib/src/test/scala/org/apache/spark/ml/feature/StopWordsRemoverSuite.scala

[08/10] spark git commit: [SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java TestSuites

2016-05-10 Thread andrewor14
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/mllib/src/test/java/org/apache/spark/mllib/regression/JavaIsotonicRegressionSuite.java -- diff --git

[02/10] spark git commit: [SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java TestSuites

2016-05-10 Thread andrewor14
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite.scala -- diff --git

[02/10] spark git commit: [SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java TestSuites

2016-05-10 Thread andrewor14
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite.scala -- diff --git

[05/10] spark git commit: [SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java TestSuites

2016-05-10 Thread andrewor14
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala -- diff --git a/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala

[10/10] spark git commit: [SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java TestSuites

2016-05-10 Thread andrewor14
[SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java TestSuites ## What changes were proposed in this pull request? Use SparkSession instead of SQLContext in Scala/Java TestSuites, as this PR is already very big; Python TestSuites will be handled in a different PR. ## How was this patch

[06/10] spark git commit: [SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java TestSuites

2016-05-10 Thread andrewor14
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/sql/core/src/test/java/test/org/apache/spark/sql/sources/JavaDatasetAggregatorSuiteBase.java -- diff --git

[04/10] spark git commit: [SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java TestSuites

2016-05-10 Thread andrewor14
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonParsingOptionsSuite.scala -- diff --git

[07/10] spark git commit: [SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java TestSuites

2016-05-10 Thread andrewor14
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/mllib/src/test/scala/org/apache/spark/ml/feature/StopWordsRemoverSuite.scala -- diff --git a/mllib/src/test/scala/org/apache/spark/ml/feature/StopWordsRemoverSuite.scala

[01/10] spark git commit: [SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java TestSuites

2016-05-10 Thread andrewor14
Repository: spark Updated Branches: refs/heads/branch-2.0 19a9c23c2 -> 5bf74b44d http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/AggregationQuerySuite.scala

[09/10] spark git commit: [SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java TestSuites

2016-05-10 Thread andrewor14
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/mllib/src/test/java/org/apache/spark/ml/regression/JavaLinearRegressionSuite.java -- diff --git

[04/10] spark git commit: [SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java TestSuites

2016-05-10 Thread andrewor14
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonParsingOptionsSuite.scala -- diff --git

[08/10] spark git commit: [SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java TestSuites

2016-05-10 Thread andrewor14
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/mllib/src/test/java/org/apache/spark/mllib/regression/JavaIsotonicRegressionSuite.java -- diff --git

[03/10] spark git commit: [SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java TestSuites

2016-05-10 Thread andrewor14
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetReadBenchmark.scala -- diff --git

[06/10] spark git commit: [SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java TestSuites

2016-05-10 Thread andrewor14
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/sql/core/src/test/java/test/org/apache/spark/sql/sources/JavaDatasetAggregatorSuiteBase.java -- diff --git

[01/10] spark git commit: [SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java TestSuites

2016-05-10 Thread andrewor14
Repository: spark Updated Branches: refs/heads/master bcfee153b -> ed0b4070f http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/AggregationQuerySuite.scala --

spark git commit: [SPARK-12837][CORE] reduce network IO for accumulators

2016-05-10 Thread andrewor14
Repository: spark Updated Branches: refs/heads/branch-2.0 af12b0a50 -> 19a9c23c2 [SPARK-12837][CORE] reduce network IO for accumulators Sending un-updated accumulators back to the driver makes no sense, as merging a zero-value accumulator is a no-op. We should only send back updated

spark git commit: [SPARK-12837][CORE] reduce network IO for accumulators

2016-05-10 Thread andrewor14
Repository: spark Updated Branches: refs/heads/master 0b9cae424 -> bcfee153b [SPARK-12837][CORE] reduce network IO for accumulators Sending un-updated accumulators back to the driver makes no sense, as merging a zero-value accumulator is a no-op. We should only send back updated accumulators,
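
The optimization can be sketched as a simple executor-side filter: skip accumulators still at their zero value, since merging them on the driver changes nothing. The dict shape (`zero`/`value` keys) is an assumption for illustration, not Spark's accumulator representation.

```python
def updates_to_send(accumulators):
    # Only ship accumulators that were actually updated on the executor:
    # merging a zero-value accumulator into the driver's copy is a no-op,
    # so sending it back is wasted network IO.
    return [acc for acc in accumulators if acc["value"] != acc["zero"]]
```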

spark git commit: [SPARK-13670][LAUNCHER] Propagate error from launcher to shell.

2016-05-10 Thread vanzin
Repository: spark Updated Branches: refs/heads/master 488863d87 -> 36c5892b4 [SPARK-13670][LAUNCHER] Propagate error from launcher to shell. bash doesn't really propagate errors from subshells when using redirection the way spark-class does; so, instead, this change captures the exit code of

spark git commit: [SPARK-11249][LAUNCHER] Throw error if app resource is not provided.

2016-05-10 Thread vanzin
Repository: spark Updated Branches: refs/heads/branch-2.0 918bf6e1b -> af12b0a50 [SPARK-11249][LAUNCHER] Throw error if app resource is not provided. Without this, the code would build an invalid spark-submit command line, and a more cryptic error would be presented to the user. Also, expose

spark git commit: [SPARK-11249][LAUNCHER] Throw error if app resource is not provided.

2016-05-10 Thread vanzin
Repository: spark Updated Branches: refs/heads/master 36c5892b4 -> 0b9cae424 [SPARK-11249][LAUNCHER] Throw error if app resource is not provided. Without this, the code would build an invalid spark-submit command line, and a more cryptic error would be presented to the user. Also, expose a
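
The fail-fast validation can be sketched like this: reject a missing app resource before assembling the command line, so the user sees a clear error instead of a cryptic spark-submit failure later. `build_submit_args` is a hypothetical helper, not the launcher's real API.

```python
def build_submit_args(app_resource, main_class=None):
    # Validate early: without an app resource the command line would be
    # invalid, and the eventual failure would be far more cryptic.
    if not app_resource:
        raise ValueError("Missing application resource")
    args = ["spark-submit"]
    if main_class:
        args += ["--class", main_class]
    args.append(app_resource)
    return args
```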

spark git commit: [SPARK-13382][DOCS][PYSPARK] Update pyspark testing notes in build docs

2016-05-10 Thread vanzin
Repository: spark Updated Branches: refs/heads/branch-2.0 1a6272e26 -> a66ebbca0 [SPARK-13382][DOCS][PYSPARK] Update pyspark testing notes in build docs ## What changes were proposed in this pull request? The current build documents don't specify that for PySpark tests we need to include

spark git commit: [SPARK-13382][DOCS][PYSPARK] Update pyspark testing notes in build docs

2016-05-10 Thread vanzin
Repository: spark Updated Branches: refs/heads/master 264626536 -> 488863d87 [SPARK-13382][DOCS][PYSPARK] Update pyspark testing notes in build docs ## What changes were proposed in this pull request? The current build documents don't specify that for PySpark tests we need to include Hive

spark git commit: [SPARK-14773] [SPARK-15179] [SQL] Fix SQL building and enable Hive tests

2016-05-10 Thread davies
Repository: spark Updated Branches: refs/heads/master 2dfb9cd1f -> 264626536 [SPARK-14773] [SPARK-15179] [SQL] Fix SQL building and enable Hive tests ## What changes were proposed in this pull request? This PR fixes SQL building for predicate subqueries and correlated scalar subqueries. It

spark git commit: [SPARK-14773] [SPARK-15179] [SQL] Fix SQL building and enable Hive tests

2016-05-10 Thread davies
Repository: spark Updated Branches: refs/heads/branch-2.0 4aa905297 -> 1a6272e26 [SPARK-14773] [SPARK-15179] [SQL] Fix SQL building and enable Hive tests ## What changes were proposed in this pull request? This PR fixes SQL building for predicate subqueries and correlated scalar subqueries.

spark git commit: [SPARK-15154] [SQL] Change key types to Long in tests

2016-05-10 Thread davies
Repository: spark Updated Branches: refs/heads/branch-2.0 841666d5d -> 4aa905297 [SPARK-15154] [SQL] Change key types to Long in tests ## What changes were proposed in this pull request? As reported in the JIRA, the two tests changed here use a key of type Integer where the Spark SQL

spark git commit: [SPARK-15154] [SQL] Change key types to Long in tests

2016-05-10 Thread davies
Repository: spark Updated Branches: refs/heads/master 8a12580d2 -> 2dfb9cd1f [SPARK-15154] [SQL] Change key types to Long in tests ## What changes were proposed in this pull request? As reported in the JIRA, the two tests changed here use a key of type Integer where the Spark SQL code

spark git commit: [SPARK-14127][SQL] "DESC ": Extracts schema information from table properties for data source tables

2016-05-10 Thread yhuai
Repository: spark Updated Branches: refs/heads/master aab99d31a -> 8a12580d2 [SPARK-14127][SQL] "DESC ": Extracts schema information from table properties for data source tables ## What changes were proposed in this pull request? This is a follow-up of #12934 and #12844. This PR adds a set

spark git commit: [SPARK-14963][YARN] Using recoveryPath if NM recovery is enabled

2016-05-10 Thread tgraves
Repository: spark Updated Branches: refs/heads/master a019e6efb -> aab99d31a [SPARK-14963][YARN] Using recoveryPath if NM recovery is enabled ## What changes were proposed in this pull request? From Hadoop 2.5+, the YARN NM supports NM recovery, which uses a recovery path for auxiliary services

spark git commit: [SPARK-14542][CORE] PipeRDD should allow configurable buffer size for…

2016-05-10 Thread srowen
Repository: spark Updated Branches: refs/heads/master 570647267 -> a019e6efb [SPARK-14542][CORE] PipeRDD should allow configurable buffer size for… ## What changes were proposed in this pull request? Currently PipedRDD internally uses PrintWriter to write data to the stdin of the piped

spark git commit: [SPARK-14542][CORE] PipeRDD should allow configurable buffer size for…

2016-05-10 Thread srowen
Repository: spark Updated Branches: refs/heads/branch-2.0 58f77421b -> ff2b715e0 [SPARK-14542][CORE] PipeRDD should allow configurable buffer size for… ## What changes were proposed in this pull request? Currently PipedRDD internally uses PrintWriter to write data to the stdin of the
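
The buffer-size knob can be illustrated with Python's `subprocess`, where `bufsize` plays the role of PipedRDD's new configurable buffer: the writer feeding the piped command's stdin uses an explicit buffer size instead of a hard-coded one. `pipe_lines` is illustrative, not Spark code.

```python
import subprocess

def pipe_lines(lines, command, buffer_size=8192):
    # Feed lines to the piped command's stdin with an explicit buffer size
    # (analogous to making PipedRDD's writer buffer configurable), then
    # collect the command's stdout lines.
    proc = subprocess.Popen(command, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, bufsize=buffer_size,
                            text=True)
    out, _ = proc.communicate("\n".join(lines) + "\n")
    return out.splitlines()
```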

spark git commit: [SPARK-15215][SQL] Fix Explain Parsing and Output

2016-05-10 Thread hvanhovell
Repository: spark Updated Branches: refs/heads/branch-2.0 27bb51ca4 -> 58f77421b [SPARK-15215][SQL] Fix Explain Parsing and Output ## What changes were proposed in this pull request? This PR is to address a few existing issues in `EXPLAIN`: - The `EXPLAIN` options `LOGICAL | FORMATTED |

spark git commit: [SPARK-15215][SQL] Fix Explain Parsing and Output

2016-05-10 Thread hvanhovell
Repository: spark Updated Branches: refs/heads/master f45379173 -> 570647267 [SPARK-15215][SQL] Fix Explain Parsing and Output ## What changes were proposed in this pull request? This PR is to address a few existing issues in `EXPLAIN`: - The `EXPLAIN` options `LOGICAL | FORMATTED |