spark git commit: [SPARK-11849][SQL] Analyzer should replace current_date and current_timestamp with literals

2015-11-19 Thread rxin
Repository: spark Updated Branches: refs/heads/master 1a93323c5 -> f44999200 [SPARK-11849][SQL] Analyzer should replace current_date and current_timestamp with literals We currently rely on the optimizer's constant folding to replace current_timestamp and current_date. However, this can stil
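A minimal sketch of the guarantee this buys, assuming an in-scope `sqlContext`: once the analyzer substitutes the literal, every occurrence in a single query resolves to the same instant.

```scala
// After analysis both calls below are the same literal, so t1 and t2 are
// guaranteed to agree within one query, whether or not constant folding runs.
val df = sqlContext.sql("SELECT current_timestamp() AS t1, current_timestamp() AS t2")
df.show()
```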

spark git commit: [SPARK-11849][SQL] Analyzer should replace current_date and current_timestamp with literals

2015-11-19 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-1.6 eb1ba1e2e -> 0c970fd2c [SPARK-11849][SQL] Analyzer should replace current_date and current_timestamp with literals We currently rely on the optimizer's constant folding to replace current_timestamp and current_date. However, this can

spark git commit: [SPARK-11840][SQL] Restore the 1.5 behavior of planning a single distinct aggregation.

2015-11-19 Thread yhuai
Repository: spark Updated Branches: refs/heads/master f44999200 -> 962878843 [SPARK-11840][SQL] Restore the 1.5 behavior of planning a single distinct aggregation. This change affects queries that have a single distinct column and do not have any grouping expression, like `SEL
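The affected query shape, sketched under the assumption of an in-scope `sqlContext` and an illustrative table `t` (the message above truncates its own example):

```scala
// A single distinct aggregate with no GROUP BY: planned as in 1.5 again.
sqlContext.sql("SELECT COUNT(DISTINCT a) FROM t")
```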

spark git commit: [SPARK-11840][SQL] Restore the 1.5 behavior of planning a single distinct aggregation.

2015-11-19 Thread yhuai
Repository: spark Updated Branches: refs/heads/branch-1.6 0c970fd2c -> e12ab57a1 [SPARK-11840][SQL] Restore the 1.5 behavior of planning a single distinct aggregation. This change affects queries that have a single distinct column and do not have any grouping expression, like

spark git commit: [SPARK-11830][CORE] Make NettyRpcEnv bind to the specified host

2015-11-19 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-1.6 e12ab57a1 -> 9b5dc5c48 [SPARK-11830][CORE] Make NettyRpcEnv bind to the specified host This PR includes the following changes: 1. Bind NettyRpcEnv to the specified host. 2. Fix the port information in the log for NettyRpcEnv. 3. Fix the

spark git commit: [SPARK-11830][CORE] Make NettyRpcEnv bind to the specified host

2015-11-19 Thread rxin
Repository: spark Updated Branches: refs/heads/master 962878843 -> 72d150c27 [SPARK-11830][CORE] Make NettyRpcEnv bind to the specified host This PR includes the following changes: 1. Bind NettyRpcEnv to the specified host. 2. Fix the port information in the log for NettyRpcEnv. 3. Fix the serv

spark git commit: [SPARK-11633][SQL] LogicalRDD throws TreeNode Exception: Failed to Copy Node

2015-11-19 Thread marmbrus
Repository: spark Updated Branches: refs/heads/master 72d150c27 -> 276a7e130 [SPARK-11633][SQL] LogicalRDD throws TreeNode Exception: Failed to Copy Node When handling self-joins, the implementation did not consider the case insensitivity of HiveContext. It could cause an exception as shown
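A hedged reconstruction of the failing pattern, assuming an in-scope `hiveContext` and a hypothetical table `src`: a self-join whose column references differ only in case.

```scala
import hiveContext.implicits._

val df = hiveContext.table("src")
// Under a case-insensitive HiveContext, "KEY" and "key" name the same column;
// copying the LogicalRDD node used to fail on this shape.
val joined = df.as("l").join(df.as("r"), $"l.KEY" === $"r.key")
```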

spark git commit: [SPARK-11633][SQL] LogicalRDD throws TreeNode Exception: Failed to Copy Node

2015-11-19 Thread marmbrus
Repository: spark Updated Branches: refs/heads/branch-1.6 9b5dc5c48 -> 5f86c001c [SPARK-11633][SQL] LogicalRDD throws TreeNode Exception: Failed to Copy Node When handling self-joins, the implementation did not consider the case insensitivity of HiveContext. It could cause an exception as sh

spark git commit: [SPARK-11848][SQL] Support EXPLAIN in DataSet APIs

2015-11-19 Thread marmbrus
Repository: spark Updated Branches: refs/heads/master 276a7e130 -> 7d4aba187 [SPARK-11848][SQL] Support EXPLAIN in DataSet APIs When debugging the DataSet API, I always need to print the logical and physical plans. I am wondering if we should provide a simple API for EXPLAIN? Author: gatorsmile
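A sketch of the API being added, assuming an in-scope `sqlContext`:

```scala
import sqlContext.implicits._

val ds = sqlContext.range(3).as[Long]
ds.explain()                  // prints the physical plan
ds.explain(extended = true)   // prints the logical and physical plans
```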

spark git commit: [SPARK-11848][SQL] Support EXPLAIN in DataSet APIs

2015-11-19 Thread marmbrus
Repository: spark Updated Branches: refs/heads/branch-1.6 5f86c001c -> 6021852e0 [SPARK-11848][SQL] Support EXPLAIN in DataSet APIs When debugging the DataSet API, I always need to print the logical and physical plans. I am wondering if we should provide a simple API for EXPLAIN? Author: gators

spark git commit: [SPARK-11750][SQL] revert SPARK-11727 and code clean up

2015-11-19 Thread marmbrus
Repository: spark Updated Branches: refs/heads/master 7d4aba187 -> 47d1c2325 [SPARK-11750][SQL] revert SPARK-11727 and code clean up After some experimentation, I found it's not convenient to have separate encoder builders: `FlatEncoder` and `ProductEncoder`. For example, when creating encoders for
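From user code the cleanup is invisible; a small sketch, assuming an in-scope `sqlContext`, of the two kinds of types that previously went through separate builders:

```scala
import sqlContext.implicits._

val flat     = Seq(1, 2, 3).toDS()             // formerly FlatEncoder territory
val products = Seq((1, "a"), (2, "b")).toDS()  // formerly ProductEncoder territory
```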

spark git commit: [SPARK-11750][SQL] revert SPARK-11727 and code clean up

2015-11-19 Thread marmbrus
Repository: spark Updated Branches: refs/heads/branch-1.6 6021852e0 -> b8069a23f [SPARK-11750][SQL] revert SPARK-11727 and code clean up After some experimentation, I found it's not convenient to have separate encoder builders: `FlatEncoder` and `ProductEncoder`. For example, when creating encoders

spark git commit: [SPARK-11778][SQL] parse table name before it is passed to lookupRelation

2015-11-19 Thread marmbrus
Repository: spark Updated Branches: refs/heads/branch-1.6 b8069a23f -> fdffc400c [SPARK-11778][SQL] parse table name before it is passed to lookupRelation Fix a bug in DataFrameReader.table (a table with a schema name such as "db_name.table" doesn't work). Use SqlParser.parseTableIdentifier to par
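The case being fixed, assuming an in-scope `sqlContext` (database and table names are illustrative):

```scala
// A qualified name now goes through SqlParser.parseTableIdentifier instead of
// being handed verbatim to lookupRelation, so this works:
val df = sqlContext.read.table("db_name.my_table")
```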

spark git commit: [SPARK-11778][SQL] parse table name before it is passed to lookupRelation

2015-11-19 Thread marmbrus
Repository: spark Updated Branches: refs/heads/master 47d1c2325 -> 470007453 [SPARK-11778][SQL] parse table name before it is passed to lookupRelation Fix a bug in DataFrameReader.table (a table with a schema name such as "db_name.table" doesn't work). Use SqlParser.parseTableIdentifier to parse t

spark git commit: [SPARK-11812][PYSPARK] invFunc=None works properly with python's reduceByKeyAndWindow

2015-11-19 Thread tdas
Repository: spark Updated Branches: refs/heads/master 470007453 -> 599a8c6e2 [SPARK-11812][PYSPARK] invFunc=None works properly with python's reduceByKeyAndWindow invFunc is optional and can be None. Instead of invFunc (the parameter), invReduceFunc (a local function) was checked for truthiness
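The fix itself lives in the Python API; for reference, a sketch of the corresponding Scala overloads that mirror the invFunc/None distinction, with a hypothetical `pairs` stream:

```scala
import org.apache.spark.streaming.Seconds
import org.apache.spark.streaming.dstream.DStream

def windowedCounts(pairs: DStream[(String, Int)]): DStream[(String, Int)] = {
  // Without an inverse function (Python's invFunc=None), the whole window is
  // recomputed each slide; with one, departing data is subtracted incrementally:
  //   pairs.reduceByKeyAndWindow(_ + _, _ - _, Seconds(30), Seconds(10))
  pairs.reduceByKeyAndWindow(_ + _, Seconds(30), Seconds(10))
}
```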

spark git commit: [SPARK-11812][PYSPARK] invFunc=None works properly with python's reduceByKeyAndWindow

2015-11-19 Thread tdas
Repository: spark Updated Branches: refs/heads/branch-1.5 9957925e4 -> 001c44667 [SPARK-11812][PYSPARK] invFunc=None works properly with python's reduceByKeyAndWindow invFunc is optional and can be None. Instead of invFunc (the parameter), invReduceFunc (a local function) was checked for true

spark git commit: [SPARK-11812][PYSPARK] invFunc=None works properly with python's reduceByKeyAndWindow

2015-11-19 Thread tdas
Repository: spark Updated Branches: refs/heads/branch-1.6 fdffc400c -> abe393024 [SPARK-11812][PYSPARK] invFunc=None works properly with python's reduceByKeyAndWindow invFunc is optional and can be None. Instead of invFunc (the parameter), invReduceFunc (a local function) was checked for true

spark git commit: [SPARK-11812][PYSPARK] invFunc=None works properly with python's reduceByKeyAndWindow

2015-11-19 Thread tdas
Repository: spark Updated Branches: refs/heads/branch-1.4 eda1ff4ee -> 5118abb4e [SPARK-11812][PYSPARK] invFunc=None works properly with python's reduceByKeyAndWindow invFunc is optional and can be None. Instead of invFunc (the parameter), invReduceFunc (a local function) was checked for true

spark git commit: [SPARK-11812][PYSPARK] invFunc=None works properly with python's reduceByKeyAndWindow

2015-11-19 Thread tdas
Repository: spark Updated Branches: refs/heads/branch-1.3 5278ef0f1 -> 387d81891 [SPARK-11812][PYSPARK] invFunc=None works properly with python's reduceByKeyAndWindow invFunc is optional and can be None. Instead of invFunc (the parameter), invReduceFunc (a local function) was checked for true

[5/5] spark git commit: [SPARK-11858][SQL] Move sql.columnar into sql.execution.

2015-11-19 Thread rxin
[SPARK-11858][SQL] Move sql.columnar into sql.execution. In addition, tightened visibility of a lot of classes in the columnar package from private[sql] to private[columnar]. Author: Reynold Xin Closes #9842 from rxin/SPARK-11858. (cherry picked from commit 014c0f7a9dfdb1686fa9aeacaadb2a17a85
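The practical effect, sketched from the file paths in the diffs below; the class is Spark-internal, so the import only resolves inside the Spark source tree:

```scala
// before: import org.apache.spark.sql.columnar.InMemoryColumnarTableScan
import org.apache.spark.sql.execution.columnar.InMemoryColumnarTableScan
```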

[3/5] spark git commit: [SPARK-11858][SQL] Move sql.columnar into sql.execution.

2015-11-19 Thread rxin
http://git-wip-us.apache.org/repos/asf/spark/blob/ea1a51fc/sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryColumnarTableScan.scala -- diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/colu

[2/5] spark git commit: [SPARK-11858][SQL] Move sql.columnar into sql.execution.

2015-11-19 Thread rxin
http://git-wip-us.apache.org/repos/asf/spark/blob/ea1a51fc/sql/core/src/test/scala/org/apache/spark/sql/columnar/PartitionBatchPruningSuite.scala -- diff --git a/sql/core/src/test/scala/org/apache/spark/sql/columnar/PartitionBatch

[5/5] spark git commit: [SPARK-11858][SQL] Move sql.columnar into sql.execution.

2015-11-19 Thread rxin
[SPARK-11858][SQL] Move sql.columnar into sql.execution. In addition, tightened visibility of a lot of classes in the columnar package from private[sql] to private[columnar]. Author: Reynold Xin Closes #9842 from rxin/SPARK-11858. Project: http://git-wip-us.apache.org/repos/asf/spark/repo Co

[3/5] spark git commit: [SPARK-11858][SQL] Move sql.columnar into sql.execution.

2015-11-19 Thread rxin
http://git-wip-us.apache.org/repos/asf/spark/blob/014c0f7a/sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryColumnarTableScan.scala -- diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/colu

[1/5] spark git commit: [SPARK-11858][SQL] Move sql.columnar into sql.execution.

2015-11-19 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-1.6 abe393024 -> ea1a51fc1 http://git-wip-us.apache.org/repos/asf/spark/blob/ea1a51fc/sql/core/src/test/scala/org/apache/spark/sql/execution/columnar/compression/TestCompressibleColumnBuilder.scala ---

[4/5] spark git commit: [SPARK-11858][SQL] Move sql.columnar into sql.execution.

2015-11-19 Thread rxin
http://git-wip-us.apache.org/repos/asf/spark/blob/ea1a51fc/sql/core/src/main/scala/org/apache/spark/sql/columnar/compression/CompressionScheme.scala -- diff --git a/sql/core/src/main/scala/org/apache/spark/sql/columnar/compression

[2/5] spark git commit: [SPARK-11858][SQL] Move sql.columnar into sql.execution.

2015-11-19 Thread rxin
http://git-wip-us.apache.org/repos/asf/spark/blob/014c0f7a/sql/core/src/test/scala/org/apache/spark/sql/columnar/PartitionBatchPruningSuite.scala -- diff --git a/sql/core/src/test/scala/org/apache/spark/sql/columnar/PartitionBatch

[1/5] spark git commit: [SPARK-11858][SQL] Move sql.columnar into sql.execution.

2015-11-19 Thread rxin
Repository: spark Updated Branches: refs/heads/master 599a8c6e2 -> 014c0f7a9 http://git-wip-us.apache.org/repos/asf/spark/blob/014c0f7a/sql/core/src/test/scala/org/apache/spark/sql/execution/columnar/compression/TestCompressibleColumnBuilder.scala ---

[4/5] spark git commit: [SPARK-11858][SQL] Move sql.columnar into sql.execution.

2015-11-19 Thread rxin
http://git-wip-us.apache.org/repos/asf/spark/blob/014c0f7a/sql/core/src/main/scala/org/apache/spark/sql/columnar/compression/CompressionScheme.scala -- diff --git a/sql/core/src/main/scala/org/apache/spark/sql/columnar/compression

spark git commit: [SPARK-11831][CORE][TESTS] Use port 0 to avoid port conflicts in tests

2015-11-19 Thread andrewor14
Repository: spark Updated Branches: refs/heads/branch-1.6 ea1a51fc1 -> a4a71b0a5 [SPARK-11831][CORE][TESTS] Use port 0 to avoid port conflicts in tests Use port 0 to fix port-contention-related flakiness Author: Shixiong Zhu Closes #9841 from zsxwing/SPARK-11831. (cherry picked from commit
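The underlying trick, as a self-contained sketch: bind to port 0 and let the OS pick a free ephemeral port.

```scala
import java.net.ServerSocket

// Port 0 asks the kernel for any free port, so concurrent test runs cannot
// collide on a hard-coded one.
val socket = new ServerSocket(0)
val boundPort = socket.getLocalPort  // the port actually assigned
socket.close()
```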

spark git commit: [SPARK-11831][CORE][TESTS] Use port 0 to avoid port conflicts in tests

2015-11-19 Thread andrewor14
Repository: spark Updated Branches: refs/heads/master 014c0f7a9 -> 90d384dcb [SPARK-11831][CORE][TESTS] Use port 0 to avoid port conflicts in tests Use port 0 to fix port-contention-related flakiness Author: Shixiong Zhu Closes #9841 from zsxwing/SPARK-11831. Project: http://git-wip-us.ap

spark git commit: [SPARK-11799][CORE] Make it explicit in executor logs that uncaught e…

2015-11-19 Thread andrewor14
Repository: spark Updated Branches: refs/heads/branch-1.6 a4a71b0a5 -> 6a88251ac [SPARK-11799][CORE] Make it explicit in executor logs that uncaught exceptions are thrown during executor shutdown This commit will make sure that uncaught exceptions thrown during executor shutdown are prepended with [Container in
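A minimal sketch of the idea; `shuttingDown` is a hypothetical flag, and the exact tag text is truncated above, so "[Container in shutdown]" here is an assumption:

```scala
import java.util.concurrent.atomic.AtomicBoolean

val shuttingDown = new AtomicBoolean(false)
sys.addShutdownHook(shuttingDown.set(true))

Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler {
  override def uncaughtException(t: Thread, e: Throwable): Unit = {
    // Tag exceptions that surface mid-shutdown so they are not mistaken for
    // genuine executor failures in the logs.
    val prefix = if (shuttingDown.get) "[Container in shutdown] " else ""
    System.err.println(s"${prefix}Uncaught exception in thread ${t.getName}: $e")
  }
})
```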

spark git commit: [SPARK-11799][CORE] Make it explicit in executor logs that uncaught e…

2015-11-19 Thread andrewor14
Repository: spark Updated Branches: refs/heads/master 90d384dcb -> 3bd77b213 [SPARK-11799][CORE] Make it explicit in executor logs that uncaught exceptions are thrown during executor shutdown This commit will make sure that uncaught exceptions thrown during executor shutdown are prepended with [Container in shu

spark git commit: [SPARK-11828][CORE] Register DAGScheduler metrics source after app id is known.

2015-11-19 Thread andrewor14
Repository: spark Updated Branches: refs/heads/master 3bd77b213 -> f7135ed71 [SPARK-11828][CORE] Register DAGScheduler metrics source after app id is known. Author: Marcelo Vanzin Closes #9820 from vanzin/SPARK-11828. Project: http://git-wip-us.apache.org/repos/asf/spark/repo Commit: http:

spark git commit: [SPARK-11828][CORE] Register DAGScheduler metrics source after app id is known.

2015-11-19 Thread andrewor14
Repository: spark Updated Branches: refs/heads/branch-1.6 6a88251ac -> d087acadf [SPARK-11828][CORE] Register DAGScheduler metrics source after app id is known. Author: Marcelo Vanzin Closes #9820 from vanzin/SPARK-11828. Project: http://git-wip-us.apache.org/repos/asf/spark/repo Commit: h

spark git commit: [SPARK-11746][CORE] Use cache-aware method dependencies

2015-11-19 Thread andrewor14
Repository: spark Updated Branches: refs/heads/branch-1.6 d087acadf -> baae1ccc9 [SPARK-11746][CORE] Use cache-aware method dependencies a small change Author: hushan Closes #9691 from suyanNone/unify-getDependency. (cherry picked from commit 01403aa97b6aaab9b86ae806b5ea9e82690a741f) Signe

spark git commit: [SPARK-11275][SQL] Incorrect results when using rollup/cube

2015-11-19 Thread yhuai
Repository: spark Updated Branches: refs/heads/branch-1.6 baae1ccc9 -> 70d4edda8 [SPARK-11275][SQL] Incorrect results when using rollup/cube Fixes a bug with grouping sets (including cube/rollup) where aggregates that included grouping expressions would return the wrong (null) result. Also sim
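The affected shape, sketched with a hypothetical DataFrame `df`: an aggregate that itself references a grouping column under rollup/cube.

```scala
import org.apache.spark.sql.functions.{col, sum}

// df: a hypothetical DataFrame with columns a and b. Before the fix, sum(a)
// could wrongly come back null in the rolled-up rows because `a` is also a
// grouping expression.
val result = df.rollup(col("a"), col("b")).agg(sum(col("a")))
```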

spark git commit: [SPARK-11275][SQL] Incorrect results when using rollup/cube

2015-11-19 Thread yhuai
Repository: spark Updated Branches: refs/heads/master 01403aa97 -> 37cff1b1a [SPARK-11275][SQL] Incorrect results when using rollup/cube Fixes a bug with grouping sets (including cube/rollup) where aggregates that included grouping expressions would return the wrong (null) result. Also simplif

spark git commit: [SPARK-11746][CORE] Use cache-aware method dependencies

2015-11-19 Thread andrewor14
Repository: spark Updated Branches: refs/heads/master f7135ed71 -> 01403aa97 [SPARK-11746][CORE] Use cache-aware method dependencies a small change Author: hushan Closes #9691 from suyanNone/unify-getDependency. Project: http://git-wip-us.apache.org/repos/asf/spark/repo Commit: http://git

spark git commit: [SPARK-4134][CORE] Lower severity of some executor loss logs.

2015-11-19 Thread andrewor14
Repository: spark Updated Branches: refs/heads/master 37cff1b1a -> 880128f37 [SPARK-4134][CORE] Lower severity of some executor loss logs. Don't log ERROR messages when executors are explicitly killed or when the exit reason is not yet known. Author: Marcelo Vanzin Closes #9780 from vanzin/

spark git commit: [SPARK-4134][CORE] Lower severity of some executor loss logs.

2015-11-19 Thread andrewor14
Repository: spark Updated Branches: refs/heads/branch-1.6 70d4edda8 -> ff3497542 [SPARK-4134][CORE] Lower severity of some executor loss logs. Don't log ERROR messages when executors are explicitly killed or when the exit reason is not yet known. Author: Marcelo Vanzin Closes #9780 from van

spark git commit: [SPARK-11845][STREAMING][TEST] Added unit test to verify TrackStateRDD is correctly checkpointed

2015-11-19 Thread andrewor14
Repository: spark Updated Branches: refs/heads/branch-1.6 ff3497542 -> 19ea30d82 [SPARK-11845][STREAMING][TEST] Added unit test to verify TrackStateRDD is correctly checkpointed To make sure that all lineage is correctly truncated for TrackStateRDD when checkpointed. Author: Tathagata Das

spark git commit: [SPARK-11845][STREAMING][TEST] Added unit test to verify TrackStateRDD is correctly checkpointed

2015-11-19 Thread andrewor14
Repository: spark Updated Branches: refs/heads/master 880128f37 -> b2cecb80e [SPARK-11845][STREAMING][TEST] Added unit test to verify TrackStateRDD is correctly checkpointed To make sure that all lineage is correctly truncated for TrackStateRDD when checkpointed. Author: Tathagata Das Clo

spark git commit: [SPARK-11831][CORE][TESTS] Use port 0 to avoid port conflicts in tests (backport to branch 1.5)

2015-11-19 Thread andrewor14
Repository: spark Updated Branches: refs/heads/branch-1.5 001c44667 -> 6fe1ce6ab [SPARK-11831][CORE][TESTS] Use port 0 to avoid port conflicts in tests (backport to branch 1.5) backport #9841 to branch 1.5 Author: Shixiong Zhu Closes #9850 from zsxwing/SPARK-11831-branch-1.5. Project: ht

spark git commit: [SPARK-11864][SQL] Improve performance of max/min

2015-11-19 Thread rxin
Repository: spark Updated Branches: refs/heads/master b2cecb80e -> ee2140774 [SPARK-11864][SQL] Improve performance of max/min This PR includes the following optimizations: 1) greatest/least already do the null check, so the `If` and `IsNull` are not necessary. 2) In greatest/least, it shou
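For context, the null semantics that make the extra wrapper redundant, assuming an in-scope `sqlContext`:

```scala
// greatest/least skip nulls and return null only when every input is null,
// so an outer If(IsNull(...)) check adds nothing.
sqlContext.sql("SELECT greatest(1, NULL, 3) AS g, least(1, NULL, 3) AS l").show()
// g = 3, l = 1
```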

spark git commit: [SPARK-11864][SQL] Improve performance of max/min

2015-11-19 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-1.6 19ea30d82 -> 8b34fb0b8 [SPARK-11864][SQL] Improve performance of max/min This PR includes the following optimizations: 1) greatest/least already do the null check, so the `If` and `IsNull` are not necessary. 2) In greatest/least, it

spark git commit: [SPARK-11544][SQL][TEST-HADOOP1.0] sqlContext doesn't use PathFilter

2015-11-19 Thread yhuai
Repository: spark Updated Branches: refs/heads/master ee2140774 -> 7ee7d5a3c [SPARK-11544][SQL][TEST-HADOOP1.0] sqlContext doesn't use PathFilter Apply the user-supplied PathFilter while retrieving the files from fs. Author: Dilip Biswal Closes #9830 from dilipbiswal/spark-11544. Project:
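A sketch of supplying such a filter, assuming an in-scope SparkContext `sc`; the filter class is hypothetical and the config key is assumed from the Hadoop mapreduce API:

```scala
import org.apache.hadoop.fs.{Path, PathFilter}

// Hypothetical filter: skip in-progress files.
class SkipTmpFiles extends PathFilter {
  override def accept(path: Path): Boolean = !path.getName.endsWith(".tmp")
}

// With this commit, sqlContext honors the registered filter when listing files.
sc.hadoopConfiguration.setClass(
  "mapreduce.input.pathFilter.class", classOf[SkipTmpFiles], classOf[PathFilter])
```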

spark git commit: [SPARK-11544][SQL][TEST-HADOOP1.0] sqlContext doesn't use PathFilter

2015-11-19 Thread yhuai
Repository: spark Updated Branches: refs/heads/branch-1.6 8b34fb0b8 -> a936fa5c5 [SPARK-11544][SQL][TEST-HADOOP1.0] sqlContext doesn't use PathFilter Apply the user-supplied PathFilter while retrieving the files from fs. Author: Dilip Biswal Closes #9830 from dilipbiswal/spark-11544. (cher

spark git commit: [SPARK-11846] Add save/load for AFTSurvivalRegression and IsotonicRegression

2015-11-19 Thread meng
Repository: spark Updated Branches: refs/heads/master 7ee7d5a3c -> 4114ce20f [SPARK-11846] Add save/load for AFTSurvivalRegression and IsotonicRegression https://issues.apache.org/jira/browse/SPARK-11846 mengxr Author: Xusen Yin Closes #9836 from yinxusen/SPARK-11846. Project: http://git
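A sketch of the persistence API these commits wire up (the path is illustrative):

```scala
import org.apache.spark.ml.regression.IsotonicRegression

val ir = new IsotonicRegression()
ir.save("/tmp/isotonic")                                 // from MLWritable
val restored = IsotonicRegression.load("/tmp/isotonic")  // from MLReadable
```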

spark git commit: [SPARK-11846] Add save/load for AFTSurvivalRegression and IsotonicRegression

2015-11-19 Thread meng
Repository: spark Updated Branches: refs/heads/branch-1.6 a936fa5c5 -> 4774897f9 [SPARK-11846] Add save/load for AFTSurvivalRegression and IsotonicRegression https://issues.apache.org/jira/browse/SPARK-11846 mengxr Author: Xusen Yin Closes #9836 from yinxusen/SPARK-11846. (cherry picked f

spark git commit: [SPARK-11829][ML] Add read/write to estimators under ml.feature (II)

2015-11-19 Thread meng
Repository: spark Updated Branches: refs/heads/branch-1.6 4774897f9 -> d7b3d5785 [SPARK-11829][ML] Add read/write to estimators under ml.feature (II) Add read/write support to the following estimators under spark.ml: * ChiSqSelector * PCA * VectorIndexer * Word2Vec Author: Yanbo Liang Close

spark git commit: [SPARK-11829][ML] Add read/write to estimators under ml.feature (II)

2015-11-19 Thread meng
Repository: spark Updated Branches: refs/heads/master 4114ce20f -> 3b7f056da [SPARK-11829][ML] Add read/write to estimators under ml.feature (II) Add read/write support to the following estimators under spark.ml: * ChiSqSelector * PCA * VectorIndexer * Word2Vec Author: Yanbo Liang Closes #9

spark git commit: [SPARK-11875][ML][PYSPARK] Update doc for PySpark HasCheckpointInterval

2015-11-19 Thread meng
Repository: spark Updated Branches: refs/heads/master 3b7f056da -> 7216f4054 [SPARK-11875][ML][PYSPARK] Update doc for PySpark HasCheckpointInterval * Update doc for PySpark ```HasCheckpointInterval``` so that users can understand how to disable checkpointing. * Update doc for PySpark ```cacheNodeI
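What the clarified doc spells out, sketched on ALS; any estimator mixing in HasCheckpointInterval behaves the same:

```scala
import org.apache.spark.ml.recommendation.ALS

val als = new ALS().setCheckpointInterval(-1)  // -1 disables checkpointing
// als.setCheckpointInterval(10) would checkpoint every 10 iterations
```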

spark git commit: [SPARK-11875][ML][PYSPARK] Update doc for PySpark HasCheckpointInterval

2015-11-19 Thread meng
Repository: spark Updated Branches: refs/heads/branch-1.6 d7b3d5785 -> 0a878ad0e [SPARK-11875][ML][PYSPARK] Update doc for PySpark HasCheckpointInterval * Update doc for PySpark ```HasCheckpointInterval``` so that users can understand how to disable checkpointing. * Update doc for PySpark ```cacheN

spark git commit: [SPARK-11869][ML] Clean up TempDirectory properly in ML tests

2015-11-19 Thread meng
Repository: spark Updated Branches: refs/heads/branch-1.6 0a878ad0e -> 60d937529 [SPARK-11869][ML] Clean up TempDirectory properly in ML tests Need to remove the parent directory (```className```) rather than just tempDir (```className/random_name```). I tested this with IDFSuite, which has 2 rea

spark git commit: [SPARK-11869][ML] Clean up TempDirectory properly in ML tests

2015-11-19 Thread meng
Repository: spark Updated Branches: refs/heads/master 7216f4054 -> 0fff8eb3e [SPARK-11869][ML] Clean up TempDirectory properly in ML tests Need to remove the parent directory (```className```) rather than just tempDir (```className/random_name```). I tested this with IDFSuite, which has 2 read/wr

spark git commit: [SPARK-11867] Add save/load for kmeans and naive bayes

2015-11-19 Thread meng
Repository: spark Updated Branches: refs/heads/branch-1.6 60d937529 -> 1ce6394e3 [SPARK-11867] Add save/load for kmeans and naive bayes https://issues.apache.org/jira/browse/SPARK-11867 Author: Xusen Yin Closes #9849 from yinxusen/SPARK-11867. (cherry picked from commit 3e1d120cedb4bd9e159

spark git commit: [SPARK-11867] Add save/load for kmeans and naive bayes

2015-11-19 Thread meng
Repository: spark Updated Branches: refs/heads/master 0fff8eb3e -> 3e1d120ce [SPARK-11867] Add save/load for kmeans and naive bayes https://issues.apache.org/jira/browse/SPARK-11867 Author: Xusen Yin Closes #9849 from yinxusen/SPARK-11867. Project: http://git-wip-us.apache.org/repos/asf/s