[ https://issues.apache.org/jira/browse/SPARK-14831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260981#comment-15260981 ]

Xiangrui Meng commented on SPARK-14831:
---------------------------------------

+1 on `read.ml` and `write.ml`, which are consistent with `read.df` and 
`write.df` and leave space for future features. Putting the discussions 
together, we have:

* read.ml and write.ml for saving/loading ML models
* a "spark." prefix for ML algorithms, especially when we cannot closely match 
existing R methods or would have to shadow them. This includes:
** spark.glm and glm (which doesn't shadow stats::glm)
** spark.kmeans
** spark.naiveBayes
** spark.survreg
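
To make the naming proposal concrete, here is a rough sketch of how the prefixed wrappers could sit alongside base R. The column names, `family` argument, and session setup below are illustrative assumptions, not a settled API:

{code:none}
library(SparkR)

# Build a Spark DataFrame from a local data.frame (note that dots in
# column names are converted to underscores in SparkR).
df <- createDataFrame(iris)

# Proposed SparkR wrapper: the "spark." prefix makes the origin explicit.
sparkModel <- spark.glm(df, Sepal_Length ~ Sepal_Width, family = "gaussian")

# Base R's stats::glm remains untouched and callable on local data.
localModel <- glm(Sepal.Length ~ Sepal.Width, data = iris)
{code}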

For methods with `spark.` prefix, I suggest the following signature:

{code:none}
spark.kmeans(df, formula, [required params], [optional params], ...)
{code}
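
Under that convention, a fit plus a save/load round trip via `write.ml`/`read.ml` might look like the following. Parameter names such as `k` and `maxIter`, and the output path, are assumptions for illustration only:

{code:none}
# df is a SparkDataFrame; the formula selects the feature columns.
model <- spark.kmeans(df, ~ Sepal_Length + Sepal_Width, k = 3, maxIter = 20)

# Persist and reload the fitted model with the proposed generic pair.
write.ml(model, "/tmp/kmeans-model")
model2 <- read.ml("/tmp/kmeans-model")
{code}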

Sounds good?

> Make ML APIs in SparkR consistent
> ---------------------------------
>
>                 Key: SPARK-14831
>                 URL: https://issues.apache.org/jira/browse/SPARK-14831
>             Project: Spark
>          Issue Type: Improvement
>          Components: ML, SparkR
>    Affects Versions: 2.0.0
>            Reporter: Xiangrui Meng
>            Assignee: Xiangrui Meng
>            Priority: Critical
>
> In current master, we have 4 ML methods in SparkR:
> {code:none}
> glm(formula, family, data, ...)
> kmeans(data, centers, ...)
> naiveBayes(formula, data, ...)
> survreg(formula, data, ...)
> {code}
> We tried to keep the signatures similar to existing ones in R. However, if we 
> put them together, they are not consistent. One example is k-means, which 
> doesn't accept a formula. Instead of looking at each method independently, we 
> might want to update the signature of kmeans to
> {code:none}
> kmeans(formula, data, centers, ...)
> {code}
> We can also discuss possible global changes here. For example, `glm` puts 
> `family` before `data` while `kmeans` puts `centers` after `data`. This is 
> not consistent. And logically, the formula doesn't mean anything without 
> associating with a DataFrame. So it makes more sense to me to have the 
> following signature:
> {code:none}
> algorithm(df, formula, [required params], [optional params])
> {code}
> If we make this change, we might want to avoid name collisions because they 
> have different signatures. We can use `ml.kmeans`, `ml.glm`, etc.
> Sorry for discussing API changes at the last minute. But I think it would be 
> better to have consistent signatures in SparkR.
> cc: [~shivaram] [~josephkb] [~yanboliang]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
