[ https://issues.apache.org/jira/browse/SPARK-13068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131344#comment-15131344 ]

holdenk commented on SPARK-13068:
---------------------------------

This seems like a good direction; the current approach only works for params 
that live inside Spark, since it isn't really extensible. That said, there 
probably won't be many people writing custom Scala estimators who want to 
expose them in Python.

Right now the validators you're talking about only look at type, but on the 
Scala side the validators can also check ranges and similar constraints. It 
might be worth looking at params.scala for ideas.
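
To illustrate the difference, here is a minimal Python sketch of a validator 
that checks range as well as type, in the spirit of the range checks in 
params.scala (the names in_range and validate_threshold are hypothetical, 
not Spark's API):

    # Hypothetical sketch: a validator factory that enforces both type
    # and range, going beyond a pure type check.
    def in_range(lower, upper):
        """Return a validator accepting numeric values in [lower, upper]."""
        def validate(value):
            if not isinstance(value, (int, float)):
                raise TypeError("Expected a number, got %s" % type(value))
            if not lower <= value <= upper:
                raise ValueError(
                    "Value %s not in [%s, %s]" % (value, lower, upper))
            return value
        return validate

    # Usage: enforce that a threshold param stays in [0.0, 1.0].
    validate_threshold = in_range(0.0, 1.0)
    validate_threshold(0.5)   # ok, returns 0.5

Returning the value makes a validator like this composable with the existing 
type-conversion step.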

> Extend pyspark ml paramtype conversion to support lists
> -------------------------------------------------------
>
>                 Key: SPARK-13068
>                 URL: https://issues.apache.org/jira/browse/SPARK-13068
>             Project: Spark
>          Issue Type: Improvement
>          Components: ML, PySpark
>            Reporter: holdenk
>            Priority: Trivial
>
> In SPARK-7675 we added type conversion for PySpark ML params. We should 
> follow up and support param type conversion for lists and nested structures 
> as required. This blocks giving all PySpark ML params type information.
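
For a sense of what the requested list conversion could look like, here is a 
minimal sketch (the helper to_list_of_floats is hypothetical, not the actual 
implementation):

    # Hypothetical sketch of a list-valued param converter.
    def to_list_of_floats(value):
        """Convert a list-like value to a list of floats, else raise."""
        if isinstance(value, (list, tuple)):
            try:
                return [float(v) for v in value]
            except (TypeError, ValueError):
                pass
        raise TypeError("Could not convert %r to a list of floats" % (value,))

    to_list_of_floats([1, 2.5, "3"])   # [1.0, 2.5, 3.0]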


