[ https://issues.apache.org/jira/browse/SPARK-14831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254551#comment-15254551 ]
Shivaram Venkataraman commented on SPARK-14831:
-----------------------------------------------

1. Agree. I think a valid policy could be: if we are able to support most of the functionality of the base R function, then we add the overloaded method. All methods, though, will have the spark.<methodName> variant. We can do one pass right now to add spark.<methodName> and remove the overloads that don't match the base R functionality well enough.

2. We have so far used `read.df` and `write.df` to save and load data frames. I think read.model and write.model might work (I can't find an overloaded method in R for that), but I'm also fine if we just want to have a separate set of commands for models.

> Make ML APIs in SparkR consistent
> ---------------------------------
>
>                 Key: SPARK-14831
>                 URL: https://issues.apache.org/jira/browse/SPARK-14831
>             Project: Spark
>          Issue Type: Improvement
>          Components: ML, SparkR
>    Affects Versions: 2.0.0
>            Reporter: Xiangrui Meng
>            Assignee: Xiangrui Meng
>            Priority: Critical
>
> In current master, we have 4 ML methods in SparkR:
> {code:none}
> glm(formula, family, data, ...)
> kmeans(data, centers, ...)
> naiveBayes(formula, data, ...)
> survreg(formula, data, ...)
> {code}
> We tried to keep the signatures similar to the existing ones in R. However, if we
> put them together, they are not consistent. One example is k-means, which
> doesn't accept a formula. Instead of looking at each method independently, we
> might want to update the signature of kmeans to
> {code:none}
> kmeans(formula, data, centers, ...)
> {code}
> We can also discuss possible global changes here. For example, `glm` puts
> `family` before `data` while `kmeans` puts `centers` after `data`. This is
> not consistent. And logically, the formula doesn't mean anything without
> being associated with a DataFrame.
> So it makes more sense to me to have the
> following signature:
> {code:none}
> algorithm(df, formula, [required params], [optional params])
> {code}
> If we make this change, we might want to avoid name collisions because they
> have different signatures. We can use `ml.kmeans`, `ml.glm`, etc.
> Sorry for discussing API changes at the last minute. But I think it would be
> better to have consistent signatures in SparkR.
> cc: [~shivaram] [~josephkb] [~yanboliang]

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
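
To make the proposal concrete, here is a hypothetical sketch of how the unified `algorithm(df, formula, ...)` signature might read in SparkR. None of these `ml.*` wrappers or the `write.model`/`read.model` pair exist at this point; they are only the naming options discussed above, and `df` is assumed to be a SparkDataFrame:

{code:none}
# Hypothetical: each algorithm takes the DataFrame first, then the formula,
# then required params, then optional params, per the proposed convention.
model1 <- ml.glm(df, Sepal_Length ~ Sepal_Width, family = "gaussian")
model2 <- ml.kmeans(df, ~ Sepal_Length + Sepal_Width, centers = 3)
model3 <- ml.naiveBayes(df, Species ~ .)
model4 <- ml.survreg(df, Surv(time, status) ~ x)

# A single save/load pair for all models, mirroring read.df/write.df:
write.model(model2, "/tmp/kmeans-model")
model2 <- read.model("/tmp/kmeans-model")
{code}

The appeal of this ordering is that the DataFrame-first convention matches `read.df`/`write.df`, and the `ml.` (or `spark.`) prefix avoids the overload-mismatch problem with base R functions like `kmeans`, which take a matrix rather than a formula.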