[ https://issues.apache.org/jira/browse/SPARK-21542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16611085#comment-16611085 ]
John Bauer commented on SPARK-21542:
------------------------------------

You don't show your code for __init__ or setParams. I recall getting this error before I used the @keyword_only decorator; see, for example, https://stackoverflow.com/questions/32331848/create-a-custom-transformer-in-pyspark-ml I hope to get my own custom transformer pipeline to persist sometime next week. If I succeed, I will try to provide an example if no one else has.

> Helper functions for custom Python Persistence
> ----------------------------------------------
>
>                 Key: SPARK-21542
>                 URL: https://issues.apache.org/jira/browse/SPARK-21542
>             Project: Spark
>          Issue Type: New Feature
>          Components: ML, PySpark
>    Affects Versions: 2.2.0
>            Reporter: Ajay Saini
>            Assignee: Ajay Saini
>            Priority: Major
>             Fix For: 2.3.0
>
> Currently, there is no way to easily persist JSON-serializable parameters in
> Python only. All parameters in Python are persisted by converting them to
> Java objects and using the Java persistence implementation. To facilitate
> the creation of custom Python-only pipeline stages, it would be good to have
> a Python-only persistence framework so that these stages do not need to be
> implemented in Scala for persistence.
> This task involves:
> - Adding implementations of DefaultParamsReadable, DefaultParamsWritable,
> DefaultParamsReader, and DefaultParamsWriter in PySpark.
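For readers hitting the same error: the pattern the comment alludes to is that @keyword_only stashes the caller's keyword arguments in self._input_kwargs, which __init__ and setParams then forward to the Params machinery. Below is a minimal stdlib-only sketch of that mechanism (the real decorator ships in the pyspark package, and a real transformer would also mix in Transformer, DefaultParamsReadable, and DefaultParamsWritable from pyspark.ml / pyspark.ml.util to get persistence); the MyTransformer class and its _params dict here are illustrative stand-ins, not PySpark API.

```python
import functools

def keyword_only(func):
    """Simplified stand-in for pyspark's @keyword_only decorator:
    reject positional arguments and record the keyword arguments in
    self._input_kwargs before invoking the wrapped method."""
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        if args:
            raise TypeError(
                "Method %s only takes keyword arguments." % func.__name__)
        self._input_kwargs = kwargs
        return func(self, **kwargs)
    return wrapper

class MyTransformer:
    """Skeleton showing the __init__/setParams shape the comment refers to.
    A real PySpark version would subclass Transformer and the
    DefaultParams* mixins instead of keeping a plain dict."""

    @keyword_only
    def __init__(self, inputCol=None, outputCol=None):
        self._params = {}              # stand-in for the Params registry
        kwargs = self._input_kwargs    # captured by @keyword_only
        self.setParams(**kwargs)

    @keyword_only
    def setParams(self, inputCol=None, outputCol=None):
        kwargs = self._input_kwargs
        self._params.update(kwargs)    # real code calls self._set(**kwargs)
        return self
```

Without the decorator, self._input_kwargs is never set, and the usual symptom is an AttributeError inside __init__ or setParams.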