Repository: spark
Updated Branches:
refs/heads/branch-1.6 aff44f9a8 -> c3da2bd46
[SPARK-11706][STREAMING] Fix the bug that Streaming Python tests cannot report
failures
This PR just checks the test results and returns 1 if the test fails, so that
`run-tests.py` can mark it as failed.
Author:
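The mechanism behind the fix (exit with a non-zero status when any test fails, so a driver script such as `run-tests.py` can detect the failure) can be sketched in plain Python; the suite and test names below are hypothetical, not the actual Streaming tests:

```python
import sys
import unittest

def run_suite():
    # Run a test suite and propagate failure via the process exit code,
    # so a driver script can tell success from failure.
    class SampleTest(unittest.TestCase):
        def test_ok(self):
            self.assertEqual(1 + 1, 2)

    suite = unittest.TestSuite()
    suite.addTest(SampleTest("test_ok"))
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    # Return 1 if any test failed or errored, else 0.
    return 0 if result.wasSuccessful() else 1

if __name__ == "__main__":
    sys.exit(run_suite())
```

Without the `sys.exit` on the result, the process exits 0 even when tests fail, which is exactly the bug described above.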
Repository: spark
Updated Branches:
refs/heads/master ad960885b -> ec80c0c2f
[SPARK-11706][STREAMING] Fix the bug that Streaming Python tests cannot report
failures
This PR just checks the test results and returns 1 if the test fails, so that
`run-tests.py` can mark it as failed.
Author:
Repository: spark
Updated Branches:
refs/heads/branch-1.6 98c614d66 -> 4a1bcb26d
[SPARK-11723][ML][DOC] Use LibSVM data source rather than
MLUtils.loadLibSVMFile to load DataFrame
Use the LibSVM data source rather than MLUtils.loadLibSVMFile to load DataFrames,
including:
* Use libSVM data source
Repository: spark
Updated Branches:
refs/heads/master 61a28486c -> 99693fef0
[SPARK-11723][ML][DOC] Use LibSVM data source rather than
MLUtils.loadLibSVMFile to load DataFrame
Use the LibSVM data source rather than MLUtils.loadLibSVMFile to load DataFrames,
including:
* Use libSVM data source for
Repository: spark
Updated Branches:
refs/heads/branch-1.6 9fa9ad0e2 -> 98c614d66
[SPARK-11445][DOCS] Replaced example code in mllib-ensembles.md using
include_example
I have made the required changes and tested them.
Kindly review the changes.
Author: Rishabh Bhardwaj
Repository: spark
Updated Branches:
refs/heads/master 7b5d9051c -> 61a28486c
[SPARK-11445][DOCS] Replaced example code in mllib-ensembles.md using
include_example
I have made the required changes and tested them.
Kindly review the changes.
Author: Rishabh Bhardwaj
Closes
Repository: spark
Updated Branches:
refs/heads/branch-1.6 6459a6747 -> 3035e9d23
[SPARK-11654][SQL][FOLLOW-UP] fix some mistakes and clean up
* rename `AppendColumn` to `AppendColumns` to be consistent with the physical
plan name.
* clean up stale comments.
* always pass in resolved encoder
Repository: spark
Updated Branches:
refs/heads/branch-1.6 3035e9d23 -> 8757221a3
[SPARK-11727][SQL] Split ExpressionEncoder into FlatEncoder and ProductEncoder
This also adds more tests for encoders and fixes bugs that I found:
* when converting an array to a Catalyst array, we can only skip element
Repository: spark
Updated Branches:
refs/heads/master 99693fef0 -> a24477996
[SPARK-11690][PYSPARK] Add pivot to python api
This PR adds `pivot` to the Python API of `GroupedData` with the same syntax as
Scala/Java.
Author: Andrew Ray
Closes #9653 from
Repository: spark
Updated Branches:
refs/heads/branch-1.6 4a1bcb26d -> 6459a6747
[SPARK-11690][PYSPARK] Add pivot to python api
This PR adds `pivot` to the Python API of `GroupedData` with the same syntax as
Scala/Java.
Author: Andrew Ray
Closes #9653 from
Repository: spark
Updated Branches:
refs/heads/master a24477996 -> 23b8188f7
[SPARK-11654][SQL][FOLLOW-UP] fix some mistakes and clean up
* rename `AppendColumn` to `AppendColumns` to be consistent with the physical
plan name.
* clean up stale comments.
* always pass in resolved encoder to
Repository: spark
Updated Branches:
refs/heads/master d7b2b97ad -> 2d2411faa
[SPARK-11672][ML] Set active SQLContext in MLlibTestSparkContext.beforeAll
Still saw some error messages caused by `SQLContext.getOrCreate`:
Repository: spark
Updated Branches:
refs/heads/branch-1.6 8757221a3 -> 58ab347d1
[SPARK-11672][ML] Set active SQLContext in MLlibTestSparkContext.beforeAll
Still saw some error messages caused by `SQLContext.getOrCreate`:
Repository: spark
Updated Branches:
refs/heads/master 2d2411faa -> 912b94363
[SPARK-11336] Add links to example codes
https://issues.apache.org/jira/browse/SPARK-11336
mengxr I added a hyperlink to the Spark repo on GitHub and a note about the
examples' existence in the Spark code repo to each code example. I
Repository: spark
Updated Branches:
refs/heads/branch-1.6 58ab347d1 -> ffd23baeb
[SPARK-11336] Add links to example codes
https://issues.apache.org/jira/browse/SPARK-11336
mengxr I added a hyperlink to the Spark repo on GitHub and a note about the
examples' existence in the Spark code repo to each code example. I
Repository: spark
Updated Branches:
refs/heads/master 912b94363 -> bdfbc1dca
[MINOR][ML] remove MLlibTestsSparkContext from ImpuritySuite
ImpuritySuite doesn't need SparkContext.
Author: Xiangrui Meng
Closes #9698 from
Repository: spark
Updated Branches:
refs/heads/branch-1.5 3676d4c4d -> 330961bbf
[SPARK-8029] robust shuffle writer (for 1.5 branch)
Currently, all shuffle writers write to the target path directly, so the file
could be corrupted by another attempt of the same partition on the same executor.
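The usual remedy for this kind of corruption, and the spirit of the fix, is to write to a unique temporary file and atomically rename it into place. A minimal pure-Python sketch of that technique (not the actual shuffle-writer code; the function name is made up):

```python
import os
import tempfile

def write_atomically(target_path, data):
    """Write `data` to `target_path` via a unique temp file plus rename,
    so a concurrent attempt for the same partition never observes (or
    produces) a half-written target file."""
    dir_name = os.path.dirname(os.path.abspath(target_path))
    # mkstemp gives each attempt its own private file in the same
    # directory, so the final rename stays within one filesystem.
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        # os.rename is atomic on POSIX filesystems: readers see either
        # the old complete file or the new complete file, never a mix.
        os.rename(tmp_path, target_path)
    except Exception:
        os.unlink(tmp_path)
        raise
```

Two attempts racing on the same target then each produce a complete file, and whichever rename lands last wins.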
Repository: spark
Updated Branches:
refs/heads/branch-1.6 ffd23baeb -> 6efe8b583
[MINOR][ML] remove MLlibTestsSparkContext from ImpuritySuite
ImpuritySuite doesn't need SparkContext.
Author: Xiangrui Meng
Closes #9698 from
Repository: spark
Updated Branches:
refs/heads/branch-1.6 6efe8b583 -> bcc871091
[SPARK-7970] Skip closure cleaning for SQL operations
Also introduces a new Spark-private API in RDD.scala named
`mapPartitionsInternal`, which skips closure cleaning of the RDD elements.
Author: nitin goyal
Repository: spark
Updated Branches:
refs/heads/master bdfbc1dca -> c939c70ac
[SPARK-7970] Skip closure cleaning for SQL operations
Also introduces a new Spark-private API in RDD.scala named
`mapPartitionsInternal`, which skips closure cleaning of the RDD elements.
Author: nitin goyal
Repository: spark
Updated Branches:
refs/heads/master ec80c0c2f -> 7b5d9051c
[SPARK-11678][SQL] Partition discovery should stop at the root path of the
table.
https://issues.apache.org/jira/browse/SPARK-11678
This PR passes the root paths of the table to the partition discovery
Repository: spark
Updated Branches:
refs/heads/branch-1.6 c3da2bd46 -> 9fa9ad0e2
[SPARK-11678][SQL] Partition discovery should stop at the root path of the
table.
https://issues.apache.org/jira/browse/SPARK-11678
This PR passes the root paths of the table to the partition