[ https://issues.apache.org/jira/browse/SPARK-13068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133285#comment-15133285 ]

Seth Hendrickson commented on SPARK-13068:
------------------------------------------

Upon closer examination, I think the best way to approach params on the Python 
side is to mimic the param subclasses that were created for Java on the Scala 
side. So, instead of having specific validation functions passed to each param, 
we would have subclasses of {{Param}} such as {{IntParam}}, {{FloatParam}}, 
{{StringParam}}, ..., {{ListIntParam}}, {{ListStringParam}}, etc. Each 
subclass can define its own type conversion and type checking mechanisms. 
{{expectedType}} would no longer be necessary, and we could pass an {{isValid}} 
function as an optional parameter to ensure valid values are passed. 
This is flexible enough to handle future cases, is fairly intuitive, and 
closely mimics the Scala params. I appreciate thoughts and feedback.
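
To make the idea concrete, here is a minimal sketch of what such typed param subclasses could look like on the Python side. The class and method names below are illustrative only, not an actual pyspark API: each subclass owns its type conversion/checking, and an optional {{isValid}} callback handles value-level validation.

{code}
class Param(object):
    """Base class; subclasses override _convert for type coercion."""

    def __init__(self, name, doc, isValid=None):
        self.name = name
        self.doc = doc
        # Optional value-level validation beyond type checking.
        self.isValid = isValid if isValid is not None else lambda v: True

    def _convert(self, value):
        # Subclasses override this to coerce and check the type.
        return value

    def validate(self, value):
        converted = self._convert(value)
        if not self.isValid(converted):
            raise ValueError(
                "Invalid value %r for param %s" % (value, self.name))
        return converted


class IntParam(Param):
    def _convert(self, value):
        if isinstance(value, bool):
            raise TypeError("%s expects an int, got bool" % self.name)
        return int(value)


class ListIntParam(Param):
    def _convert(self, value):
        # List conversion falls out naturally as its own subclass,
        # which is the case SPARK-13068 asks for.
        return [int(v) for v in value]


# Usage: maxIter must be a positive int; a float like 10.0 is coerced.
maxIter = IntParam("maxIter", "max number of iterations",
                   isValid=lambda v: v > 0)
print(maxIter.validate(10.0))  # -> 10
{code}

With this shape, {{expectedType}} disappears because the subclass itself encodes the expected type, and nested/list types are just further subclasses rather than special cases in a shared conversion function.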

> Extend pyspark ml paramtype conversion to support lists
> -------------------------------------------------------
>
>                 Key: SPARK-13068
>                 URL: https://issues.apache.org/jira/browse/SPARK-13068
>             Project: Spark
>          Issue Type: Improvement
>          Components: ML, PySpark
>            Reporter: holdenk
>            Priority: Trivial
>
> In SPARK-7675 we added type conversion for PySpark ML params. We should 
> follow up and support param type conversion for lists and nested structures 
> as required. This blocks having all PySpark ML params carry type information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
