[jira] [Comment Edited] (SPARK-21972) Allow users to control input data persistence in ML Estimators via a handlePersistence ml.Param

2017-09-19 Thread zhengruifeng (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16171342#comment-16171342
 ] 

zhengruifeng edited comment on SPARK-21972 at 9/19/17 8:54 AM:
---

Since persistence handling is very algorithm-dependent, I prefer this solution:
1. Add a method {{preprocess(dataset: Dataset[_]): DataFrame}} in {{Predictor}} 
and call it before {{train()}}; casting, column selection, and data persistence 
(if necessary) happen there.
2. Add a method {{postprocess}} and call it after {{train()}}; it unpersists 
the intermediate DataFrame if needed.
3. For specific purposes, these methods can be overridden. For example, in the 
current PR, {{GeneralizedLinearRegression}} overrides {{preprocess}} to add 
column selection and casting for {{offsetCol}}.

I personally think that, to handle persistence, we can use more information. 
For example, for algorithms with a {{maxIter}} param, if {{maxIter}} is set to 
0 or 1, caching the input dataset may be unnecessary.
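The proposal above is essentially a template method with two hooks around {{train()}}. A minimal sketch in plain Scala, where {{DataFrame}} and {{Predictor}} are hypothetical simplified stand-ins for Spark's real types (so the pattern can be shown without Spark dependencies), with the {{maxIter}} heuristic folded into the default {{preprocess}}:

```scala
// Illustrative stand-in for Spark's DataFrame: just tracks cache state.
case class DataFrame(persisted: Boolean = false) {
  def persist(): DataFrame = copy(persisted = true)
  def unpersist(): DataFrame = copy(persisted = false)
}

abstract class Predictor {
  def maxIter: Int = 100

  // Step 1: called before train(); would do casting/column selection, and
  // persists the input only if it is not already cached and more than one
  // pass over the data is expected (the maxIter heuristic from the comment).
  protected def preprocess(dataset: DataFrame): DataFrame =
    if (!dataset.persisted && maxIter > 1) dataset.persist() else dataset

  // Step 2: called after train(); would release the intermediate DataFrame
  // (a no-op on this immutable stand-in).
  protected def postprocess(df: DataFrame): Unit = df.unpersist()

  protected def train(df: DataFrame): String // returns a "model" in this sketch

  // fit() wires the hooks around train(). A subclass such as
  // GeneralizedLinearRegression would override preprocess() to add
  // algorithm-specific handling, e.g. selecting and casting offsetCol.
  final def fit(dataset: DataFrame): String = {
    val df = preprocess(dataset)
    try train(df) finally postprocess(df)
  }
}
```

Subclasses then only override the hook they need, and every algorithm gets consistent persistence handling from the shared {{fit()}} path.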



> Allow users to control input data persistence in ML Estimators via a 
> handlePersistence ml.Param
> ---
>
> Key: SPARK-21972
> URL: https://issues.apache.org/jira/browse/SPARK-21972
> Project: Spark
>  Issue Type: Improvement
>  Components: ML, MLlib
>Affects Versions: 2.2.0
>Reporter: Siddharth Murching
>
> Several Spark ML algorithms (LogisticRegression, LinearRegression, KMeans, 
> etc) call {{cache()}} on uncached input datasets to improve performance.
> Unfortunately, these algorithms a) check input persistence inaccurately 
> ([SPARK-18608|https://issues.apache.org/jira/browse/SPARK-18608]) and b) 
> check the persistence level of the input dataset but not any of its parents. 
> These issues can result in unwanted double-caching of input data & degraded 
> performance (see 
> [SPARK-21799|https://issues.apache.org/jira/browse/SPARK-21799]).
> This ticket proposes adding a boolean {{handlePersistence}} param 
> (org.apache.spark.ml.param) so that users can specify whether an ML algorithm 
> should try to cache uncached input data. {{handlePersistence}} will be 
> {{true}} by default, corresponding to existing behavior (always persisting 
> uncached input), but users can achieve finer-grained control over input 
> persistence by setting {{handlePersistence}} to {{false}}.
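The decision logic the ticket describes can be sketched in a few lines of plain Scala. This is a hypothetical simplification: the real proposal would use Spark's org.apache.spark.ml.param machinery (a BooleanParam), which is not reproduced here.

```scala
// handlePersistence modeled as a plain constructor flag (default true,
// matching the proposed default and today's behavior).
class EstimatorSketch(val handlePersistence: Boolean = true) {
  // Cache only when the user allows it AND the input is not already cached.
  def shouldCache(inputAlreadyCached: Boolean): Boolean =
    handlePersistence && !inputAlreadyCached
}
```

With the default ({{true}}), an uncached input still gets persisted as before; setting the flag to {{false}} hands persistence control entirely back to the caller, avoiding the double-caching scenario from SPARK-21799.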



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Comment Edited] (SPARK-21972) Allow users to control input data persistence in ML Estimators via a handlePersistence ml.Param

2017-09-11 Thread Siddharth Murching (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16160624#comment-16160624
 ] 

Siddharth Murching edited comment on SPARK-21972 at 9/11/17 5:22 PM:
-

This issue was originally being worked on in this PR: 
https://github.com/apache/spark/pull/17014


was (Author: siddharth murching):
This issue is being worked on in this PR: 
https://github.com/apache/spark/pull/17014







[jira] [Comment Edited] (SPARK-21972) Allow users to control input data persistence in ML Estimators via a handlePersistence ml.Param

2017-09-10 Thread Siddharth Murching (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16160624#comment-16160624
 ] 

Siddharth Murching edited comment on SPARK-21972 at 9/11/17 3:46 AM:
-

This issue is being worked on in this PR: 
https://github.com/apache/spark/pull/17014


was (Author: siddharth murching):
Work has already begun on this in this PR: 
https://github.com/apache/spark/pull/17014



