Repository: spark
Updated Branches:
refs/heads/branch-2.0 b52bd8070 -> a54852350
[SPARK-16294][SQL] Labelling support for the include_example Jekyll plugin
## What changes were proposed in this pull request?
This PR adds labelling support for the `include_example` Jekyll plugin, so that
we
Repository: spark
Updated Branches:
refs/heads/master d3af6731f -> bde1d6a61
[SPARK-16294][SQL] Labelling support for the include_example Jekyll plugin
## What changes were proposed in this pull request?
This PR adds labelling support for the `include_example` Jekyll plugin, so that
we may
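The labelling idea can be sketched outside Jekyll: extract only the region of a source file enclosed by per-label markers. The marker syntax below (`$example on:<label>$` / `$example off:<label>$`) is an assumption for illustration, and the real plugin is Ruby; Python is used here only for a self-contained sketch.

```python
def extract_labeled_example(source: str, label: str) -> str:
    # Hypothetical markers; the real include_example plugin's syntax may differ.
    on_marker = f"$example on:{label}$"
    off_marker = f"$example off:{label}$"
    keep, out = False, []
    for line in source.splitlines():
        if on_marker in line:
            keep = True
        elif off_marker in line:
            keep = False
        elif keep:
            out.append(line)
    return "\n".join(out)

src = "x\n$example on:init$\nspark = build()\n$example off:init$\ny"
snippet = extract_labeled_example(src, "init")
```

With labels, one source file can feed several doc snippets instead of one file per snippet.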
Repository: spark
Updated Branches:
refs/heads/master 831a04f5d -> d3af6731f
[SPARK-16274][SQL] Implement xpath_boolean
## What changes were proposed in this pull request?
This patch implements the xpath_boolean expression for Spark SQL, an xpath function
that returns true or false. The
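The intended semantics (true iff the XPath expression matches something in the XML document) can be illustrated with Python's standard library, which supports a limited XPath subset. This is an analogy, not Spark's implementation:

```python
import xml.etree.ElementTree as ET

def xpath_boolean(xml: str, path: str) -> bool:
    # True iff the (limited) XPath expression matches at least one node
    return ET.fromstring(xml).find(path) is not None

hit = xpath_boolean("<a><b>1</b></a>", "b")
miss = xpath_boolean("<a><b>1</b></a>", "c")
```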
Repository: spark
Updated Branches:
refs/heads/branch-2.0 e1bdf1e02 -> b52bd8070
[SPARK-16267][TEST] Replace deprecated `CREATE TEMPORARY TABLE ... USING` from
testsuites.
## What changes were proposed in this pull request?
After SPARK-15674, `DDLStrategy` prints out the following
Repository: spark
Updated Branches:
refs/heads/master d063898be -> 831a04f5d
[SPARK-16267][TEST] Replace deprecated `CREATE TEMPORARY TABLE ... USING` from
testsuites.
## What changes were proposed in this pull request?
After SPARK-15674, `DDLStrategy` prints out the following deprecation
Repository: spark
Updated Branches:
refs/heads/branch-2.0 8da431473 -> e1bdf1e02
Revert "[SPARK-16134][SQL] optimizer rules for typed filter"
This reverts commit 8da4314735ed55f259642e2977d8d7bf2212474f.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit:
Repository: spark
Updated Branches:
refs/heads/branch-2.0 011befd20 -> 8da431473
[SPARK-16134][SQL] optimizer rules for typed filter
## What changes were proposed in this pull request?
This PR adds 3 optimizer rules for typed filter:
1. push typed filter down through `SerializeFromObject`
Repository: spark
Updated Branches:
refs/heads/master 2eaabfa41 -> d063898be
[SPARK-16134][SQL] optimizer rules for typed filter
## What changes were proposed in this pull request?
This PR adds 3 optimizer rules for typed filter:
1. push typed filter down through `SerializeFromObject` and
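The benefit of pushing a typed filter below `SerializeFromObject` can be sketched outside Spark: evaluating the predicate on the object before serialization avoids a serialize/deserialize round trip per row. A toy Python analogy, with JSON standing in for Spark's internal encoding:

```python
import json

def run_unoptimized(rows, pred):
    # serialize every object, then deserialize each one to test the predicate
    serialized = [json.dumps(r) for r in rows]
    return [s for s in serialized if pred(json.loads(s))]

def run_optimized(rows, pred):
    # filter pushed below serialization: test first, serialize only survivors
    return [json.dumps(r) for r in rows if pred(r)]

rows = [{"x": 1}, {"x": 5}]
pred = lambda r: r["x"] > 2
a, b = run_unoptimized(rows, pred), run_optimized(rows, pred)
```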
Repository: spark
Updated Branches:
refs/heads/branch-2.0 c4cebd572 -> 011befd20
[SPARK-16228][SQL] HiveSessionCatalog should return `double`-param functions
for decimal param lookups
## What changes were proposed in this pull request?
This PR supports a fallback lookup by casting
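A fallback lookup that widens decimal parameters to double can be sketched as follows; the registry layout here is illustrative, not Hive's actual catalog structure:

```python
# Toy function registry keyed by (name, parameter types)
registry = {("round", ("double",)): "round_double"}

def lookup(name, arg_types):
    key = (name, tuple(arg_types))
    if key in registry:
        return registry[key]
    # fallback: retry with decimal arguments widened to double
    widened = tuple("double" if t == "decimal" else t for t in arg_types)
    return registry.get((name, widened))
```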
Repository: spark
Updated Branches:
refs/heads/master 9b1b3ae77 -> 23c58653f
[SPARK-16238] Metrics for generated method and class bytecode size
## What changes were proposed in this pull request?
This extends SPARK-15860 to include metrics for the actual bytecode size of
janino-generated
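Such a metric typically reduces to count/total/max statistics updated once per compiled method. A minimal stand-in (not Spark's actual codegen metrics API):

```python
class SizeMetric:
    # minimal stand-in for a generated-bytecode-size metric
    def __init__(self):
        self.count, self.total, self.max = 0, 0, 0

    def update(self, size: int) -> None:
        self.count += 1
        self.total += size
        self.max = max(self.max, size)

m = SizeMetric()
for size in (120, 8000, 512):  # bytecode sizes of three generated methods
    m.update(size)
```

A max/histogram view matters here because the JVM refuses to JIT-compile methods above a bytecode-size threshold, so outliers are more interesting than the mean.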
Repository: spark
Updated Branches:
refs/heads/branch-2.0 ef0253ff6 -> c4cebd572
[SPARK-16238] Metrics for generated method and class bytecode size
## What changes were proposed in this pull request?
This extends SPARK-15860 to include metrics for the actual bytecode size of
Repository: spark
Updated Branches:
refs/heads/branch-2.0 a7f66ef62 -> ef0253ff6
[SPARK-16006][SQL] Attempting to write empty DataFrame with no fields throws
non-intuitive exception
## What changes were proposed in this pull request?
This PR allows `emptyDataFrame.write` since the user
Repository: spark
Updated Branches:
refs/heads/master 8b5a8b25b -> 9b1b3ae77
[SPARK-16006][SQL] Attempting to write empty DataFrame with no fields throws
non-intuitive exception
## What changes were proposed in this pull request?
This PR allows `emptyDataFrame.write` since the user didn't
Repository: spark
Updated Branches:
refs/heads/branch-2.0 809af6d9d -> a7f66ef62
[SPARK-16301] [SQL] The analyzer rule for resolving using joins should respect
the case sensitivity setting.
## What changes were proposed in this pull request?
The analyzer rule for resolving using joins should
Repository: spark
Updated Branches:
refs/heads/master d8a87a3ed -> 8b5a8b25b
[SPARK-16301] [SQL] The analyzer rule for resolving using joins should respect
the case sensitivity setting.
## What changes were proposed in this pull request?
The analyzer rule for resolving using joins should
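Respecting the case-sensitivity setting when resolving the `USING` columns amounts to choosing the comparison used to match column names against each side of the join. A sketch (not Spark's analyzer code):

```python
def resolve_using_column(columns, name, case_sensitive):
    # Match the USING column against available columns, honoring the setting
    if case_sensitive:
        matches = [c for c in columns if c == name]
    else:
        matches = [c for c in columns if c.lower() == name.lower()]
    if len(matches) != 1:
        raise ValueError(f"cannot resolve column {name!r}")
    return matches[0]

resolved = resolve_using_column(["Id", "v"], "id", case_sensitive=False)
```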
Repository: spark
Updated Branches:
refs/heads/branch-2.0 3cc258efb -> 809af6d9d
[TRIVIAL] [PYSPARK] Clean up orc compression option as well
## What changes were proposed in this pull request?
This PR corrects ORC compression option for PySpark as well. I think this was
missed mistakenly in
Repository: spark
Updated Branches:
refs/heads/master 64132a14f -> d8a87a3ed
[TRIVIAL] [PYSPARK] Clean up orc compression option as well
## What changes were proposed in this pull request?
This PR corrects ORC compression option for PySpark as well. I think this was
missed mistakenly in
Repository: spark
Updated Branches:
refs/heads/branch-1.6 0cb06c993 -> 1ac830aca
[SPARK-16044][SQL] Backport input_file_name() for data source based on
NewHadoopRDD to branch 1.6
## What changes were proposed in this pull request?
This PR backports
Repository: spark
Updated Branches:
refs/heads/branch-2.0 edd1905c0 -> 3cc258efb
[SPARK-16256][SQL][STREAMING] Added Structured Streaming Programming Guide
Title defines all.
Author: Tathagata Das
Closes #13945 from tdas/SPARK-16256.
(cherry picked from commit
Repository: spark
Updated Branches:
refs/heads/master cb1b9d34f -> 64132a14f
[SPARK-16256][SQL][STREAMING] Added Structured Streaming Programming Guide
Title defines all.
Author: Tathagata Das
Closes #13945 from tdas/SPARK-16256.
Project:
Repository: spark
Updated Branches:
refs/heads/master 39f2eb1da -> cb1b9d34f
[SPARK-14480][SQL] Remove meaningless StringIteratorReader for CSV data source.
## What changes were proposed in this pull request?
This PR removes meaningless `StringIteratorReader` for CSV data source.
In
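A similar simplification exists in Python's standard library: `csv.reader` consumes any iterator of strings directly, so no adapter `Reader` is needed between the line source and the parser. This is only an analogy to feeding lines straight to the CSV parser, not Spark's code:

```python
import csv

# any iterator of strings works; no intermediate Reader wrapper required
lines = iter(["a,b", "1,2"])
rows = list(csv.reader(lines))
```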
Repository: spark
Updated Branches:
refs/heads/master 8c9cd0a7a -> 39f2eb1da
[SPARK-16236][SQL][FOLLOWUP] Add Path Option back to Load API in DataFrameReader
## What changes were proposed in this pull request?
In Python API, we have the same issue. Thanks for identifying this issue,
Repository: spark
Updated Branches:
refs/heads/branch-2.0 1cde325e2 -> edd1905c0
[SPARK-16236][SQL][FOLLOWUP] Add Path Option back to Load API in DataFrameReader
## What changes were proposed in this pull request?
In Python API, we have the same issue. Thanks for identifying this issue,
Repository: spark
Updated Branches:
refs/heads/branch-2.0 d96e8c2dd -> 1cde325e2
[SPARK-16140][MLLIB][SPARKR][DOCS] Group k-means method in generated R doc
https://issues.apache.org/jira/browse/SPARK-16140
## What changes were proposed in this pull request?
Group the R doc of spark.kmeans,
Repository: spark
Updated Branches:
refs/heads/master c6a220d75 -> 8c9cd0a7a
[SPARK-16140][MLLIB][SPARKR][DOCS] Group k-means method in generated R doc
https://issues.apache.org/jira/browse/SPARK-16140
## What changes were proposed in this pull request?
Group the R doc of spark.kmeans,
Repository: spark
Updated Branches:
refs/heads/branch-2.0 ba71cf451 -> d96e8c2dd
[MINOR][SPARKR] Fix arguments of survreg in SparkR
## What changes were proposed in this pull request?
Fix wrong arguments description of ```survreg``` in SparkR.
## How was this patch tested?
```Arguments```
Repository: spark
Updated Branches:
refs/heads/master 393db655c -> 272a2f78f
[SPARK-15990][YARN] Add rolling log aggregation support for Spark on yarn
## What changes were proposed in this pull request?
Yarn supports rolling log aggregation since 2.6, previously log will only be
aggregated
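With rolling log aggregation (YARN 2.6+), logs of long-running applications can be uploaded periodically instead of only after the application finishes. The feature is driven by configuration along these lines; treat the property names as illustrative and verify them against your Spark version's running-on-YARN documentation:

```properties
# Illustrative spark-defaults.conf fragment for rolled log aggregation
spark.yarn.rolledLog.includePattern   stdout.*|stderr.*
spark.yarn.rolledLog.excludePattern   gc\.log.*
```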
Repository: spark
Updated Branches:
refs/heads/master 21385d02a -> 393db655c
[SPARK-15858][ML] Fix calculating error by tree stack overflow prob…
## What changes were proposed in this pull request?
Improving evaluateEachIteration function
Repository: spark
Updated Branches:
refs/heads/branch-2.0 1b4d63f6f -> ba71cf451
[SPARK-16261][EXAMPLES][ML] Fixed incorrect appNames in ML Examples
## What changes were proposed in this pull request?
Some appNames in ML examples are incorrect, mostly in PySpark but one in Scala.
This
Repository: spark
Updated Branches:
refs/heads/master 7ee9e39cb -> 21385d02a
[SPARK-16261][EXAMPLES][ML] Fixed incorrect appNames in ML Examples
## What changes were proposed in this pull request?
Some appNames in ML examples are incorrect, mostly in PySpark but one in Scala.
This corrects
Repository: spark
Updated Branches:
refs/heads/master d1e810885 -> 7ee9e39cb
[SPARK-16157][SQL] Add New Methods for comments in StructField and StructType
## What changes were proposed in this pull request?
Based on the previous discussion with cloud-fan hvanhovell in another related
PR
Repository: spark
Updated Branches:
refs/heads/branch-2.0 904122335 -> 1b4d63f6f
[SPARK-16291][SQL] CheckAnalysis should capture nested aggregate functions that
reference no input attributes
## What changes were proposed in this pull request?
`MAX(COUNT(*))` is invalid since aggregate
Repository: spark
Updated Branches:
refs/heads/master 757dc2c09 -> d1e810885
[SPARK-16291][SQL] CheckAnalysis should capture nested aggregate functions that
reference no input attributes
## What changes were proposed in this pull request?
`MAX(COUNT(*))` is invalid since aggregate
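The rule in question rejects aggregate functions nested inside other aggregate functions. Other SQL engines enforce the same restriction; SQLite (used here only as an accessible stand-in, not Spark) likewise refuses `MAX(COUNT(*))`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
try:
    conn.execute("SELECT MAX(COUNT(*)) FROM t")
    rejected = False
except sqlite3.OperationalError:
    # SQLite reports misuse of the inner aggregate function
    rejected = True
```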
Repository: spark
Updated Branches:
refs/heads/master f454a7f9f -> 757dc2c09
[TRIVIAL][DOCS][STREAMING][SQL] The return type mentioned in the Javadoc is
incorrect for toJavaRDD, …
## What changes were proposed in this pull request?
Change the return type mentioned in the JavaDoc for
Repository: spark
Updated Branches:
refs/heads/branch-2.0 6650c0533 -> 904122335
[TRIVIAL][DOCS][STREAMING][SQL] The return type mentioned in the Javadoc is
incorrect for toJavaRDD, …
## What changes were proposed in this pull request?
Change the return type mentioned in the JavaDoc for
Repository: spark
Updated Branches:
refs/heads/branch-2.0 22b4072e7 -> 6650c0533
[SPARK-16259][PYSPARK] cleanup options in DataFrame read/write API
## What changes were proposed in this pull request?
There are some duplicated code for options in DataFrame reader/writer API, this
PR clean
[SPARK-16266][SQL][STREAMING] Moved DataStreamReader/Writer from pyspark.sql to
pyspark.sql.streaming
## What changes were proposed in this pull request?
- Moved DataStreamReader/Writer from pyspark.sql to pyspark.sql.streaming to
make them consistent with scala packaging
- Exposed the