[ https://issues.apache.org/jira/browse/SPARK-17728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15542689#comment-15542689 ]
Jacob Eisinger commented on SPARK-17728:
----------------------------------------

Thanks for the explanation and the tricky code snippet! I had figured it was over-optimizing. It sounds like this is not a defect, because collapsing projections is normally the desired optimization. Correct?

Do you think it is worth filing a feature request to allow working with costly UDFs? Possibly:
* Memoize UDFs / other transforms on a per-row basis.
* Manually override costs for UDFs.

> UDFs are run too many times
> ---------------------------
>
>                 Key: SPARK-17728
>                 URL: https://issues.apache.org/jira/browse/SPARK-17728
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.0.0
>         Environment: Databricks Cloud / Spark 2.0.0
>            Reporter: Jacob Eisinger
>            Priority: Minor
>         Attachments: over_optimized_udf.html
>
>
> h3. Background
> Consider longer-running processes that run analytics or contact external services from UDFs. The response might not be just a single field but a whole structure of information. When breaking that structure out into columns, it is critical that the query is optimized correctly.
> h3. Steps to Reproduce
> # Create some sample data.
> # Create a UDF that returns multiple attributes.
> # Run the UDF over the data.
> # Create new columns from the returned attributes.
> # Observe the run time.
> h3. Actual Results
> The UDF is executed *multiple times* _per row._
> h3. Expected Results
> The UDF should be executed only *once* _per row._
> h3. Workaround
> Cache the Dataset after the UDF executes.
> h3. Details
> For code and more details, see [^over_optimized_udf.html]
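
For anyone without the attachment handy, here is a minimal sketch of the reproduction described above, assuming Spark 2.0's Scala API. The UDF body, column names, and the accumulator are illustrative, not taken from [^over_optimized_udf.html]:

{code:scala}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.udf

val spark = SparkSession.builder()
  .appName("udf-repro")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

// Counts how many times the UDF body actually runs.
val evals = spark.sparkContext.longAccumulator("udf evaluations")

// Stand-in for a costly UDF that returns two attributes: a Scala tuple
// becomes a struct column with fields _1 and _2.
val costly = udf { x: Long =>
  evals.add(1)
  (x * 2, x * 3)
}

val withStruct = spark.range(100).withColumn("result", costly($"id"))

// Breaking the struct out into separate columns: once the optimizer
// collapses the adjacent projections, the UDF is evaluated once per
// extracted field rather than once per row.
withStruct
  .select($"id", $"result._1".as("a"), $"result._2".as("b"))
  .collect()

println(s"UDF ran ${evals.value} times for 100 rows")  // 200, not 100
{code}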
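
And a sketch of the caching workaround, continuing from the snippet above: materializing the Dataset right after the UDF runs means the later field extraction reads cached rows instead of re-running the UDF.

{code:scala}
val cached = spark.range(100).withColumn("result", costly($"id")).cache()
cached.count()  // first action populates the cache; the UDF runs once per row

cached
  .select($"id", $"result._1".as("a"), $"result._2".as("b"))
  .collect()    // served from the cache; no further UDF evaluations
{code}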
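
The per-row memoization floated in the comment could, as a user-side stopgap, look roughly like the following (same session as the sketches above; the TrieMap keying and cache scope are assumptions, and the map is unbounded, so this only suits bounded input domains):

{code:scala}
import scala.collection.concurrent.TrieMap

// Task-local cache: the closure (including the map) is deserialized per
// task, and the duplicate evaluations for a given row happen within the
// same task, so every lookup after the first is a cheap map hit.
val memo = TrieMap.empty[Long, (Long, Long)]
val memoized = udf { x: Long =>
  memo.getOrElseUpdate(x, (x * 2, x * 3))
}
{code}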