spark git commit: [SPARK-12273][STREAMING] Make Spark Streaming web UI list Receivers in order

2015-12-11 Thread zsxwing
Repository: spark Updated Branches: refs/heads/master aa305dcaf -> 713e6959d [SPARK-12273][STREAMING] Make Spark Streaming web UI list Receivers in order Currently the Streaming web UI does NOT list Receivers in order; however, it seems more convenient for the users if Receivers are listed

spark git commit: [SPARK-11497][MLLIB][PYTHON] PySpark RowMatrix Constructor Has Type Erasure Issue

2015-12-11 Thread jkbradley
Repository: spark Updated Branches: refs/heads/branch-1.6 2ddd10486 -> bfcc8cfee [SPARK-11497][MLLIB][PYTHON] PySpark RowMatrix Constructor Has Type Erasure Issue As noted in PR #9441, implementing `tallSkinnyQR` uncovered a bug with our PySpark `RowMatrix` constructor. As discussed on the

spark git commit: [SPARK-11497][MLLIB][PYTHON] PySpark RowMatrix Constructor Has Type Erasure Issue

2015-12-11 Thread jkbradley
Repository: spark Updated Branches: refs/heads/branch-1.5 5e603a51c -> e4cf12118 [SPARK-11497][MLLIB][PYTHON] PySpark RowMatrix Constructor Has Type Erasure Issue As noted in PR #9441, implementing `tallSkinnyQR` uncovered a bug with our PySpark `RowMatrix` constructor. As discussed on the

spark git commit: [SPARK-11497][MLLIB][PYTHON] PySpark RowMatrix Constructor Has Type Erasure Issue

2015-12-11 Thread jkbradley
Repository: spark Updated Branches: refs/heads/master 713e6959d -> 1b8220387 [SPARK-11497][MLLIB][PYTHON] PySpark RowMatrix Constructor Has Type Erasure Issue As noted in PR #9441, implementing `tallSkinnyQR` uncovered a bug with our PySpark `RowMatrix` constructor. As discussed on the dev

spark git commit: [SPARK-12217][ML] Document invalid handling for StringIndexer

2015-12-11 Thread jkbradley
Repository: spark Updated Branches: refs/heads/branch-1.6 bfcc8cfee -> 75531c77e [SPARK-12217][ML] Document invalid handling for StringIndexer Added a paragraph regarding StringIndexer#setHandleInvalid to the ml-features documentation. I wonder if I should also add a snippet to the code

spark git commit: [SPARK-12217][ML] Document invalid handling for StringIndexer

2015-12-11 Thread jkbradley
Repository: spark Updated Branches: refs/heads/master 1b8220387 -> aea676ca2 [SPARK-12217][ML] Document invalid handling for StringIndexer Added a paragraph regarding StringIndexer#setHandleInvalid to the ml-features documentation. I wonder if I should also add a snippet to the code
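The handleInvalid behavior this documentation change describes can be sketched in plain Python, without Spark: "error" raises on a label not seen at fit time, while "skip" drops that row. The function names below are illustrative, not the actual Spark ML implementation.

```python
from collections import Counter

def fit_index(labels):
    """Map each distinct label to an integer index, most frequent first
    (mirroring StringIndexer's frequency-ordered indexing)."""
    ordered = [lab for lab, _ in Counter(labels).most_common()]
    return {lab: i for i, lab in enumerate(ordered)}

def transform(labels, index, handle_invalid="error"):
    """Apply the fitted index; handle_invalid controls unseen labels."""
    out = []
    for lab in labels:
        if lab in index:
            out.append(index[lab])
        elif handle_invalid == "skip":
            continue  # drop rows whose label was not seen at fit time
        else:
            raise ValueError("Unseen label: %r" % lab)
    return out
```

With "skip", rows carrying unseen labels simply vanish from the output, which is the trade-off the documentation paragraph is meant to call out.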

spark git commit: [SPARK-12158][SPARKR][SQL] Fix 'sample' functions that break R unit test cases

2015-12-11 Thread shivaram
Repository: spark Updated Branches: refs/heads/branch-1.6 03d801587 -> 47461fea7 [SPARK-12158][SPARKR][SQL] Fix 'sample' functions that break R unit test cases The existing sample functions miss the parameter `seed`, however, the corresponding function interface in `generics` has such a

spark git commit: [SPARK-12158][SPARKR][SQL] Fix 'sample' functions that break R unit test cases

2015-12-11 Thread shivaram
Repository: spark Updated Branches: refs/heads/master 1e799d617 -> 1e3526c2d [SPARK-12158][SPARKR][SQL] Fix 'sample' functions that break R unit test cases The existing sample functions miss the parameter `seed`, however, the corresponding function interface in `generics` has such a
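The point of threading a `seed` parameter through a sampling API, as this fix does for SparkR, is reproducibility: the same seed must yield the same sample. A minimal plain-Python sketch of that idea (not the SparkR implementation; the function name is illustrative):

```python
import random

def sample_rows(rows, fraction, seed=None):
    """Bernoulli-style sampling: keep each row independently with
    probability `fraction`. A fixed seed makes the result reproducible."""
    rng = random.Random(seed)
    return [r for r in rows if rng.random() < fraction]
```

Without the seed parameter, repeated runs draw different rows, which is exactly what breaks deterministic unit tests.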

spark git commit: [SPARK-11978][ML] Move dataset_example.py to examples/ml and rename to dataframe_example.py

2015-12-11 Thread jkbradley
Repository: spark Updated Branches: refs/heads/branch-1.6 75531c77e -> c2f20469d [SPARK-11978][ML] Move dataset_example.py to examples/ml and rename to dataframe_example.py Since ```Dataset``` has a new meaning in Spark 1.6, we should rename it to avoid confusion. #9873 finished the work of

spark git commit: [SPARK-12146][SPARKR] SparkR jsonFile should support multiple input files

2015-12-11 Thread shivaram
Repository: spark Updated Branches: refs/heads/master c119a34d1 -> 0fb982555 [SPARK-12146][SPARKR] SparkR jsonFile should support multiple input files * ```jsonFile``` should support multiple input files, such as: ```R jsonFile(sqlContext, c("path1", "path2")) # character vector as

spark git commit: [SPARK-12146][SPARKR] SparkR jsonFile should support multiple input files

2015-12-11 Thread shivaram
Repository: spark Updated Branches: refs/heads/branch-1.6 2e4523161 -> f05bae4a3 [SPARK-12146][SPARKR] SparkR jsonFile should support multiple input files * ```jsonFile``` should support multiple input files, such as: ```R jsonFile(sqlContext, c("path1", "path2")) # character vector

spark git commit: [SPARK-12258] [SQL] passing null into ScalaUDF (follow-up)

2015-12-11 Thread davies
Repository: spark Updated Branches: refs/heads/master 518ab5101 -> c119a34d1 [SPARK-12258] [SQL] passing null into ScalaUDF (follow-up) This is a follow-up PR for #10259 Author: Davies Liu Closes #10266 from davies/null_udf2. Project:

[spark] Git Push Summary

2015-12-11 Thread marmbrus
Repository: spark Updated Tags: refs/tags/v1.6.0-rc2 [deleted] 3e39925f9 - To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org For additional commands, e-mail: commits-h...@spark.apache.org

[spark] Git Push Summary

2015-12-11 Thread pwendell
Repository: spark Updated Tags: refs/tags/v1.6.0-rc2 [created] 23f8dfd45

[2/2] spark git commit: Preparing development version 1.6.0-SNAPSHOT

2015-12-11 Thread pwendell
Preparing development version 1.6.0-SNAPSHOT Project: http://git-wip-us.apache.org/repos/asf/spark/repo Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/2e452316 Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/2e452316 Diff:

[1/2] spark git commit: Preparing Spark release v1.6.0-rc2

2015-12-11 Thread pwendell
Repository: spark Updated Branches: refs/heads/branch-1.6 eec36607f -> 2e4523161 Preparing Spark release v1.6.0-rc2 Project: http://git-wip-us.apache.org/repos/asf/spark/repo Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/23f8dfd4 Tree:

spark git commit: [SPARK-11080] [SQL] Incorporate per-JVM id into ExprId to prevent unsafe cross-JVM comparisons

2015-12-11 Thread davies
Repository: spark Updated Branches: refs/heads/branch-1.5 cb0246c93 -> 5e603a51c [SPARK-11080] [SQL] Incorporate per-JVM id into ExprId to prevent unsafe cross-JVM comparisons In the current implementation of named expressions' `ExprIds`, we rely on a per-JVM AtomicLong to ensure that
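The scheme this commit describes, pairing a per-JVM counter with a per-JVM identifier so IDs minted in different JVMs never collide, can be sketched in Python, with a process-wide UUID standing in for the JVM id and `itertools.count` for the AtomicLong (names are illustrative, not Spark's API):

```python
import itertools
import uuid

# Per-process analogue of Spark's per-JVM id: generated once at startup.
_PROCESS_ID = uuid.uuid4()
_counter = itertools.count()

def next_expr_id():
    """Return a (process_id, local_id) pair. The local counter alone is
    only unique within one process; pairing it with the process-wide UUID
    makes IDs safe to compare across processes."""
    return (_PROCESS_ID, next(_counter))
```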

spark git commit: [SPARK-12298][SQL] Fix infinite loop in DataFrame.sortWithinPartitions

2015-12-11 Thread yhuai
Repository: spark Updated Branches: refs/heads/master a0ff6d16e -> 1e799d617 [SPARK-12298][SQL] Fix infinite loop in DataFrame.sortWithinPartitions Modifies the String overload to call the Column overload and ensures this is called in a test. Author: Ankur Dave Closes
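The fix pattern here, normalizing the string form to the Column form once and then delegating to a single code path rather than the string overload re-entering itself, can be sketched in Python with a toy Column class (all names illustrative; the real `sortWithinPartitions` sorts rows within each partition without a global shuffle):

```python
class Column:
    """Toy stand-in for Spark SQL's Column; only the name is used here."""
    def __init__(self, name):
        self.name = name

def sort_within_partitions(partitions, *cols):
    """Accept column names or Column objects; normalize strings to
    Columns up front, then sort each partition independently. The bug
    was the string path re-calling itself instead of delegating."""
    columns = [c if isinstance(c, Column) else Column(c) for c in cols]
    keys = [c.name for c in columns]
    return [sorted(p, key=lambda row: tuple(row[k] for k in keys))
            for p in partitions]
```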

spark git commit: [SPARK-12298][SQL] Fix infinite loop in DataFrame.sortWithinPartitions

2015-12-11 Thread yhuai
Repository: spark Updated Branches: refs/heads/branch-1.6 c2f20469d -> 03d801587 [SPARK-12298][SQL] Fix infinite loop in DataFrame.sortWithinPartitions Modifies the String overload to call the Column overload and ensures this is called in a test. Author: Ankur Dave

spark git commit: [SPARK-11964][DOCS][ML] Add in Pipeline Import/Export Documentation

2015-12-11 Thread jkbradley
Repository: spark Updated Branches: refs/heads/master 0fb982555 -> aa305dcaf [SPARK-11964][DOCS][ML] Add in Pipeline Import/Export Documentation Adding in Pipeline Import and Export Documentation. Author: anabranch Author: Bill Chambers