[ https://issues.apache.org/jira/browse/SPARK-7536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Yanbo Liang updated SPARK-7536:
-------------------------------
    Description: 
For new public APIs added to MLlib, we need to check the generated HTML doc and compare the Scala & Python versions. We need to track:
* Inconsistency: Do class/method/parameter names match?
* Docs: Is the Python doc missing or just a stub? We want the Python doc to be as complete as the Scala doc.
* Missing classes/methods/parameters: We should create to-do JIRAs for functionality missing from Python.
** LDA
*** Power Iteration Clustering

  was:
For new public APIs added to MLlib, we need to check the generated HTML doc and compare the Scala & Python versions. We need to track:
* Inconsistency: Do class/method/parameter names match?
* Docs: Is the Python doc missing or just a stub? We want the Python doc to be as complete as the Scala doc.
* Missing classes/methods/parameters: We should create to-do JIRAs for functionality missing from Python.
** LDA
** Power Iteration Clustering


> Audit MLlib Python API for 1.4
> ------------------------------
>
>                 Key: SPARK-7536
>                 URL: https://issues.apache.org/jira/browse/SPARK-7536
>             Project: Spark
>          Issue Type: Sub-task
>          Components: MLlib, PySpark
>            Reporter: Joseph K. Bradley
>            Assignee: Yanbo Liang

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
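The doc-completeness part of the audit above can be partially automated. A minimal sketch, assuming a hypothetical helper (`audit_doc_coverage`, not part of Spark) that flags public methods whose docstring is missing or a one-line stub; it is demonstrated on a stand-in class here rather than the actual pyspark.mllib classes:

```python
import inspect

def audit_doc_coverage(cls):
    """Return public callables of `cls` whose docstring is absent or a stub."""
    flagged = []
    for name, member in inspect.getmembers(cls, callable):
        if name.startswith("_"):
            continue  # skip private and dunder members
        doc = inspect.getdoc(member)
        if not doc or len(doc.splitlines()) < 2:
            flagged.append(name)  # no docstring, or a one-line stub
    return sorted(flagged)

# Stand-in class for illustration only; a real audit would walk
# pyspark.mllib modules (e.g. clustering) and compare against the Scala doc.
class ExampleModel:
    def fit(self):
        """Fit the model.

        Full description, matching the Scala API doc.
        """
    def predict(self):
        """Stub only."""
    def save(self):
        pass

print(audit_doc_coverage(ExampleModel))  # → ['predict', 'save']
```

The same loop run over each class in a pyspark.mllib module would produce the list of stub or missing docs that the checklist asks to track.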