Repository: spark
Updated Branches:
refs/heads/master 9f0a642f8 -> ba181c0c7
[SPARK-15235][WEBUI] Corresponding row cannot be highlighted even though cursor
is on the job on Web UI's timeline
## What changes were proposed in this pull request?
To extract job descriptions and stage name,
Repository: spark
Updated Branches:
refs/heads/branch-2.0 d9288b804 -> ca5ce5365
[SPARK-15235][WEBUI] Corresponding row cannot be highlighted even though cursor
is on the job on Web UI's timeline
## What changes were proposed in this pull request?
To extract job descriptions and stage name,
Repository: spark
Updated Branches:
refs/heads/master 1fbe2785d -> 9f0a642f8
[SPARK-15246][SPARK-4452][CORE] Fix code style and improve volatile for
## What changes were proposed in this pull request?
1. Fix code style
2. Remove `volatile` from the `elementsRead` method because there is only one thread
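The reasoning behind dropping `volatile` can be modeled in Python (class and method names are hypothetical, not Spark's actual code): state that is read and written by a single thread needs no memory barrier or lock.

```python
# Hypothetical sketch: a counter touched only by the task's own thread
# needs no synchronization; a plain attribute is sufficient and cheaper.
class SpillTracker:
    def __init__(self):
        self._elements_read = 0  # single-writer, single-reader: no lock needed

    def add_elements_read(self):
        self._elements_read += 1

    @property
    def elements_read(self):
        return self._elements_read

tracker = SpillTracker()
for _ in range(5):
    tracker.add_elements_read()
```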
Repository: spark
Updated Branches:
refs/heads/branch-2.0 1b446a461 -> d9288b804
[SPARK-15246][SPARK-4452][CORE] Fix code style and improve volatile for
## What changes were proposed in this pull request?
1. Fix code style
2. remove volatile of elementsRead method because there is only one
Repository: spark
Updated Branches:
refs/heads/master 665545960 -> 1fbe2785d
[SPARK-15255][SQL] limit the length of name for cached DataFrame
## What changes were proposed in this pull request?
We use the tree string of a SparkPlan as the name of the cached DataFrame, which
could be very long,
Repository: spark
Updated Branches:
refs/heads/branch-2.0 a675f5e1d -> 1b446a461
[SPARK-15255][SQL] limit the length of name for cached DataFrame
## What changes were proposed in this pull request?
We use the tree string of a SparkPlan as the name of the cached DataFrame, which
could be very long,
Repository: spark
Updated Branches:
refs/heads/master 3ff012051 -> 665545960
[SPARK-15265][SQL][MINOR] Fix Union query error message indentation
## What changes were proposed in this pull request?
This issue fixes the error message indentation to be consistent with other set
queries
Repository: spark
Updated Branches:
refs/heads/branch-2.0 0ecc105d2 -> a675f5e1d
[SPARK-15265][SQL][MINOR] Fix Union query error message indentation
## What changes were proposed in this pull request?
This issue fixes the error message indentation to be consistent with other set
queries
Repository: spark
Updated Branches:
refs/heads/master 5a5b83c97 -> 3ff012051
[SPARK-15250][SQL] Remove deprecated json API in DataFrameReader
## What changes were proposed in this pull request?
This PR removes the old `json(path: String)` API which is covered by the new
`json(paths:
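The idea of folding a single-path method into a variadic one can be modeled in Python (names are hypothetical; the real API is Scala's `DataFrameReader`): the variadic form accepts one path just as well, so the deprecated overload is redundant.

```python
# Hypothetical sketch: a variadic method subsumes the old single-argument
# form, so the deprecated one-path method can simply be removed.
class Reader:
    def json(self, *paths: str) -> list:
        # Stand-in for actually loading JSON files.
        return [f"loaded:{p}" for p in paths]

reader = Reader()
single = reader.json("data/a.json")                  # one path still works
many = reader.json("data/a.json", "data/b.json")     # and so do several
```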
Repository: spark
Updated Branches:
refs/heads/branch-2.0 03dfe7830 -> 0ecc105d2
[SPARK-15250][SQL] Remove deprecated json API in DataFrameReader
## What changes were proposed in this pull request?
This PR removes the old `json(path: String)` API which is covered by the new
`json(paths:
Repository: spark
Updated Branches:
refs/heads/branch-2.0 5e3192a9a -> 03dfe7830
[SPARK-15261][SQL] Remove experimental tag from DataFrameReader/Writer
## What changes were proposed in this pull request?
This patch removes experimental tag from DataFrameReader and DataFrameWriter,
and
Repository: spark
Updated Branches:
refs/heads/master 61e0bdcff -> 5a5b83c97
[SPARK-15261][SQL] Remove experimental tag from DataFrameReader/Writer
## What changes were proposed in this pull request?
This patch removes experimental tag from DataFrameReader and DataFrameWriter,
and explicitly
Repository: spark
Updated Branches:
refs/heads/branch-2.0 d8c2da9a4 -> 5e3192a9a
[SPARK-14476][SQL] Improve the physical plan visualization by adding meta info
like table name and file path for data source.
## What changes were proposed in this pull request?
Improve the physical plan
Repository: spark
Updated Branches:
refs/heads/master 86475520f -> d9ca9fd3e
[SPARK-14837][SQL][STREAMING] Added support in file stream source for reading
new files added to subdirs
## What changes were proposed in this pull request?
Currently, file stream source can only find new files if
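The behavior change, scanning recursively so that files dropped into newly created subdirectories are also picked up, can be sketched in Python (the helper name is hypothetical):

```python
import pathlib
import tempfile

# Hypothetical sketch: list files recursively so a stream source also
# notices files created inside new subdirectories.
def list_recursively(root: pathlib.Path) -> list:
    return [p for p in sorted(root.rglob("*")) if p.is_file()]

root = pathlib.Path(tempfile.mkdtemp())
sub = root / "date=2016-05-11"          # a subdirectory created later
sub.mkdir()
(sub / "part-0000.json").write_text("{}")
found = list_recursively(root)
```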
Repository: spark
Updated Branches:
refs/heads/branch-2.0 f021f3460 -> d8c2da9a4
[SPARK-14837][SQL][STREAMING] Added support in file stream source for reading
new files added to subdirs
## What changes were proposed in this pull request?
Currently, file stream source can only find new files
Repository: spark
Updated Branches:
refs/heads/branch-2.0 1db027d11 -> f021f3460
[SPARK-14936][BUILD][TESTS] FlumePollingStreamSuite is slow
https://issues.apache.org/jira/browse/SPARK-14936
## What changes were proposed in this pull request?
FlumePollingStreamSuite contains two tests which
Repository: spark
Updated Branches:
refs/heads/master da02d006b -> 86475520f
[SPARK-14936][BUILD][TESTS] FlumePollingStreamSuite is slow
https://issues.apache.org/jira/browse/SPARK-14936
## What changes were proposed in this pull request?
FlumePollingStreamSuite contains two tests which run
Repository: spark
Updated Branches:
refs/heads/master 9533f5390 -> da02d006b
[SPARK-15249][SQL] Use FunctionResource instead of (String, String) in
CreateFunction and CatalogFunction for resource
Use FunctionResource instead of (String, String) in CreateFunction and
CatalogFunction for
Repository: spark
Updated Branches:
refs/heads/master 603c4f8eb -> 9533f5390
[SPARK-6005][TESTS] Fix flaky test:
o.a.s.streaming.kafka.DirectKafkaStreamSuite.offset recovery
## What changes were proposed in this pull request?
Because this test extracts data from `DStream.generatedRDDs`
Repository: spark
Updated Branches:
refs/heads/branch-2.0 0ab195886 -> 95f254994
[SPARK-6005][TESTS] Fix flaky test:
o.a.s.streaming.kafka.DirectKafkaStreamSuite.offset recovery
## What changes were proposed in this pull request?
Because this test extracts data from `DStream.generatedRDDs`
Repository: spark
Updated Branches:
refs/heads/master d28c67544 -> 603c4f8eb
[SPARK-15207][BUILD] Use Travis CI for Java Linter and JDK7/8 compilation test
## What changes were proposed in this pull request?
Currently, Java Linter is disabled in Jenkins tests.
Repository: spark
Updated Branches:
refs/heads/branch-2.0 5a4a188fe -> 0ab195886
[SPARK-14986][SQL] Return correct result for empty LATERAL VIEW OUTER
## What changes were proposed in this pull request?
A Generate with the `outer` flag enabled should always return one or more rows
for every
Repository: spark
Updated Branches:
refs/heads/master 89f73f674 -> d28c67544
[SPARK-14986][SQL] Return correct result for empty LATERAL VIEW OUTER
## What changes were proposed in this pull request?
A Generate with the `outer` flag enabled should always return one or more rows
for every
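The `outer` semantics described above can be modeled in Python (function names are hypothetical): when the generator produces nothing for an input row, emit a single null instead of dropping the row.

```python
# Hypothetical model of `outer` generate semantics: an input row whose
# generator output is empty still yields one row, filled with None.
def generate_outer(rows, gen):
    out = []
    for row in rows:
        produced = list(gen(row))
        out.extend(produced if produced else [None])
    return out

# Explode comma-separated strings; the empty string generates no values.
exploded = generate_outer(["a,b", ""],
                          lambda s: [t for t in s.split(",") if t])
```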
Repository: spark
Updated Branches:
refs/heads/branch-2.0 82f69594f -> 5a4a188fe
[SPARK-14642][SQL] import org.apache.spark.sql.expressions._ breaks udf under
functions
## What changes were proposed in this pull request?
PR fixes the import issue which breaks udf functions.
The following
Repository: spark
Updated Branches:
refs/heads/master 93353b011 -> 89f73f674
[SPARK-14642][SQL] import org.apache.spark.sql.expressions._ breaks udf under
functions
## What changes were proposed in this pull request?
PR fixes the import issue which breaks udf functions.
The following code
Repository: spark
Updated Branches:
refs/heads/branch-2.0 a432e80b8 -> 82f69594f
[SPARK-15195][PYSPARK][DOCS] Update ml.tuning PyDocs
## What changes were proposed in this pull request?
Tag classes in ml.tuning as experimental, add docs for kfolds avg metric, and
copy TrainValidationSplit
Repository: spark
Updated Branches:
refs/heads/master 69641066a -> 93353b011
[SPARK-15195][PYSPARK][DOCS] Update ml.tuning PyDocs
## What changes were proposed in this pull request?
Tag classes in ml.tuning as experimental, add docs for kfolds avg metric, and
copy TrainValidationSplit
Repository: spark
Updated Branches:
refs/heads/master db3b4a201 -> 69641066a
[SPARK-15037][HOTFIX] Don't create 2 SparkSessions in constructor
## What changes were proposed in this pull request?
After #12907 `TestSparkSession` creates a spark session in one of the
constructors just to get
Repository: spark
Updated Branches:
refs/heads/master cddb9da07 -> db3b4a201
[SPARK-15037][HOTFIX] Replace `sqlContext` and `sparkSession` with `spark`.
This replaces `sparkSession` with `spark` in CatalogSuite.scala.
Pass the Jenkins tests.
Author: Dongjoon Hyun
Repository: spark
Updated Branches:
refs/heads/branch-2.0 42db140c5 -> bd7fd14c9
[SPARK-15037][HOTFIX] Replace `sqlContext` and `sparkSession` with `spark`.
This replaces `sparkSession` with `spark` in CatalogSuite.scala.
Pass the Jenkins tests.
Author: Dongjoon Hyun
Repository: spark
Updated Branches:
refs/heads/master 5c6b08557 -> cddb9da07
[HOTFIX] SQL test compilation error from merge conflict
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/cddb9da0
Repository: spark
Updated Branches:
refs/heads/branch-2.0 5bf74b44d -> 42db140c5
[SPARK-14603][SQL] Verification of Metadata Operations by Session Catalog
Since we cannot really trust if the underlying external catalog can throw
exceptions when there is an invalid metadata operation, let's
Repository: spark
Updated Branches:
refs/heads/master ed0b4070f -> 5c6b08557
[SPARK-14603][SQL] Verification of Metadata Operations by Session Catalog
Since we cannot really trust if the underlying external catalog can throw
exceptions when there is an invalid metadata operation, let's do it
Repository: spark
Updated Branches:
refs/heads/branch-2.0 19a9c23c2 -> 5bf74b44d
[SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java
TestSuites
## What changes were proposed in this pull request?
Use SparkSession instead of SQLContext in Scala/Java TestSuites,
as this PR is already very big; the Python TestSuites are handled in a
different PR.
## How was this patch tested?
Repository: spark
Updated Branches:
refs/heads/master bcfee153b -> ed0b4070f
[SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java
TestSuites
## What changes were proposed in this pull request?
Use SparkSession instead of SQLContext in Scala/Java TestSuites
Repository: spark
Updated Branches:
refs/heads/branch-2.0 af12b0a50 -> 19a9c23c2
[SPARK-12837][CORE] reduce network IO for accumulators
Sending un-updated accumulators back to the driver makes no sense, as merging a
zero-value accumulator is a no-op. We should only send back updated
Repository: spark
Updated Branches:
refs/heads/master 0b9cae424 -> bcfee153b
[SPARK-12837][CORE] reduce network IO for accumulators
Sending un-updated accumulators back to the driver makes no sense, as merging a
zero-value accumulator is a no-op. We should only send back updated
accumulators,
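The optimization can be sketched in Python (class and function names are hypothetical): filter out zero-valued accumulators before shipping updates, since merging them into the driver changes nothing.

```python
# Hypothetical sketch: only ship accumulators whose value actually changed;
# merging a zero update on the driver side is a no-op anyway.
class Acc:
    def __init__(self, name, value=0):
        self.name = name
        self.value = value

    def is_zero(self):
        return self.value == 0

def updates_to_send(accs):
    return [a for a in accs if not a.is_zero()]

accs = [Acc("bytesRead", 1024), Acc("recordsWritten", 0)]
to_send = updates_to_send(accs)
```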
Repository: spark
Updated Branches:
refs/heads/master 488863d87 -> 36c5892b4
[SPARK-13670][LAUNCHER] Propagate error from launcher to shell.
bash doesn't really propagate errors from subshells when using redirection
the way spark-class does; so, instead, this change captures the exit code
of
Repository: spark
Updated Branches:
refs/heads/branch-2.0 918bf6e1b -> af12b0a50
[SPARK-11249][LAUNCHER] Throw error if app resource is not provided.
Without this, the code would build an invalid spark-submit command line,
and a more cryptic error would be presented to the user. Also, expose
Repository: spark
Updated Branches:
refs/heads/master 36c5892b4 -> 0b9cae424
[SPARK-11249][LAUNCHER] Throw error if app resource is not provided.
Without this, the code would build an invalid spark-submit command line,
and a more cryptic error would be presented to the user. Also, expose
a
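The validate-early idea can be sketched in Python (function name and message are hypothetical, not the launcher's actual API): fail with a clear error up front rather than emitting an invalid spark-submit command line.

```python
# Hypothetical sketch: check the app resource before building the command,
# so the user sees a clear error instead of a cryptic downstream failure.
def build_command(app_resource):
    if not app_resource:
        raise ValueError("Missing application resource")
    return ["spark-submit", app_resource]

ok = build_command("app.jar")
try:
    build_command(None)
    error_message = None
except ValueError as e:
    error_message = str(e)
```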
Repository: spark
Updated Branches:
refs/heads/branch-2.0 1a6272e26 -> a66ebbca0
[SPARK-13382][DOCS][PYSPARK] Update pyspark testing notes in build docs
## What changes were proposed in this pull request?
The current build documents don't specify that for PySpark tests we need to
include
Repository: spark
Updated Branches:
refs/heads/master 264626536 -> 488863d87
[SPARK-13382][DOCS][PYSPARK] Update pyspark testing notes in build docs
## What changes were proposed in this pull request?
The current build documents don't specify that for PySpark tests we need to
include Hive
Repository: spark
Updated Branches:
refs/heads/master 2dfb9cd1f -> 264626536
[SPARK-14773] [SPARK-15179] [SQL] Fix SQL building and enable Hive tests
## What changes were proposed in this pull request?
This PR fixes SQL building for predicate subqueries and correlated scalar
subqueries. It
Repository: spark
Updated Branches:
refs/heads/branch-2.0 4aa905297 -> 1a6272e26
[SPARK-14773] [SPARK-15179] [SQL] Fix SQL building and enable Hive tests
## What changes were proposed in this pull request?
This PR fixes SQL building for predicate subqueries and correlated scalar
subqueries.
Repository: spark
Updated Branches:
refs/heads/branch-2.0 841666d5d -> 4aa905297
[SPARK-15154] [SQL] Change key types to Long in tests
## What changes were proposed in this pull request?
As reported in the JIRA, the two tests changed here are using a key of type
Integer where the Spark SQL
Repository: spark
Updated Branches:
refs/heads/master 8a12580d2 -> 2dfb9cd1f
[SPARK-15154] [SQL] Change key types to Long in tests
## What changes were proposed in this pull request?
As reported in the JIRA, the two tests changed here are using a key of type
Integer where the Spark SQL code
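Why widening the keys matters can be sketched in Python (the 32-bit wrap is simulated with a helper, since Python integers are unbounded; in Scala/Java an `Int` wraps silently while a `Long` does not):

```python
# Hypothetical sketch: a 32-bit Int key wraps past 2**31 - 1, so key math
# near that boundary silently produces a negative key; a 64-bit Long holds.
def as_int32(n: int) -> int:
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

wrapped = as_int32((2**31 - 1) + 1)   # Int.MaxValue + 1 wraps negative
long_key = 2**31                      # fits comfortably in a 64-bit Long
```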
Repository: spark
Updated Branches:
refs/heads/master aab99d31a -> 8a12580d2
[SPARK-14127][SQL] "DESC &lt;table&gt;": Extracts schema information from table
properties for data source tables
## What changes were proposed in this pull request?
This is a follow-up of #12934 and #12844. This PR adds a set
Repository: spark
Updated Branches:
refs/heads/master a019e6efb -> aab99d31a
[SPARK-14963][YARN] Using recoveryPath if NM recovery is enabled
## What changes were proposed in this pull request?
From Hadoop 2.5+, YARN NM supports NM recovery, which uses a recovery path for
auxiliary services
Repository: spark
Updated Branches:
refs/heads/master 570647267 -> a019e6efb
[SPARK-14542][CORE] PipeRDD should allow configurable buffer size for…
## What changes were proposed in this pull request?
Currently PipedRDD internally uses PrintWriter to write data to the stdin of
the piped
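The shape of that change, making the buffer size of the writer feeding the child process configurable, can be sketched in Python (the helper name is hypothetical; PipedRDD itself is Scala, using PrintWriter):

```python
import io

# Hypothetical sketch: wrap the child's stdin in a buffered writer whose
# buffer size is configurable instead of a fixed default.
def buffered_stdin(raw, buffer_size):
    return io.BufferedWriter(raw, buffer_size=buffer_size)

raw = io.BytesIO()                    # stand-in for the child's stdin pipe
writer = buffered_stdin(raw, 64 * 1024)
writer.write(b"line1\n")
writer.flush()                        # push buffered bytes to the pipe
```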
Repository: spark
Updated Branches:
refs/heads/branch-2.0 58f77421b -> ff2b715e0
[SPARK-14542][CORE] PipeRDD should allow configurable buffer size for…
## What changes were proposed in this pull request?
Currently PipedRDD internally uses PrintWriter to write data to the stdin of
the
Repository: spark
Updated Branches:
refs/heads/branch-2.0 27bb51ca4 -> 58f77421b
[SPARK-15215][SQL] Fix Explain Parsing and Output
## What changes were proposed in this pull request?
This PR is to address a few existing issues in `EXPLAIN`:
- The `EXPLAIN` options `LOGICAL | FORMATTED |
Repository: spark
Updated Branches:
refs/heads/master f45379173 -> 570647267
[SPARK-15215][SQL] Fix Explain Parsing and Output
## What changes were proposed in this pull request?
This PR is to address a few existing issues in `EXPLAIN`:
- The `EXPLAIN` options `LOGICAL | FORMATTED |