[ https://issues.apache.org/jira/browse/SPARK-17728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543969#comment-15543969 ]
Herman van Hovell commented on SPARK-17728:
-------------------------------------------

First of all, we implement subexpression elimination (which is a form of memoization), and this should prevent multiple invocations from happening. I am quite curious why this is not triggering in your case. Are you on a completely interpreted path? Cost functions for a UDF are doable; we would have to do this for expression trees though, and that is a non-trivial thing to implement.

> UDFs are run too many times
> ---------------------------
>
>                 Key: SPARK-17728
>                 URL: https://issues.apache.org/jira/browse/SPARK-17728
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.0.0
>         Environment: Databricks Cloud / Spark 2.0.0
>            Reporter: Jacob Eisinger
>            Priority: Minor
>         Attachments: over_optimized_udf.html
>
>
> h3. Background
> Longer-running processes might run analytics or contact external services
> from UDFs. The response might not just be a single field, but instead a
> structure of information. When attempting to break out this information, it
> is critical that the query is optimized correctly.
> h3. Steps to Reproduce
> # Create some sample data.
> # Create a UDF that returns multiple attributes.
> # Run the UDF over the data.
> # Create new columns from the multiple attributes.
> # Observe the run time.
> h3. Actual Results
> The UDF is executed *multiple times* _per row._
> h3. Expected Results
> The UDF should only be executed *once* _per row._
> h3. Workaround
> Cache the Dataset after UDF execution.
> h3. Details
> For code and more details, see [^over_optimized_udf.html]
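For quick reference, here is a minimal, self-contained sketch of the reproduction steps and the caching workaround described above. It is an illustration only, not the attached notebook: the object and UDF names, the accumulator-based call counting, and local mode are assumptions, and the "roughly 2x" figure is what the report describes for Spark 2.0.0.

{code:scala}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.udf

object OverOptimizedUdfRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("SPARK-17728 repro")
      .getOrCreate()
    import spark.implicits._

    // Accumulator to count how many times the UDF body actually runs.
    val calls = spark.sparkContext.longAccumulator("udf calls")

    // 1. Sample data.
    val df = spark.range(0L, 1000L).toDF("id")

    // 2. A UDF that returns multiple attributes (the tuple becomes a struct column).
    val expensive = udf { (id: Long) =>
      calls.add(1L)            // stand-in for an expensive or external call
      (id * 2, s"name-$id")
    }

    // 3. Run the UDF, then 4. pull its attributes out into separate columns.
    val result = df
      .withColumn("out", expensive($"id"))
      .select($"id", $"out._1".as("doubled"), $"out._2".as("name"))

    result.collect()
    // Per the report, this comes out to roughly 2x the row count (once per extracted field).
    println(s"UDF calls without caching: ${calls.value}")

    // Workaround: cache the Dataset right after the UDF so extracting the struct
    // fields reads the materialized column instead of re-running the UDF.
    calls.reset()
    val withUdf = df.withColumn("out", expensive($"id")).cache()
    withUdf.collect() // materialize the cache; the UDF runs once per row here
    withUdf.select($"out._1".as("doubled"), $"out._2".as("name")).collect()
    println(s"UDF calls with caching: ${calls.value}") // ~1x the row count

    spark.stop()
  }
}
{code}

Comparing {{explain(true)}} output for the cached and uncached variants should show whether the UDF expression ends up duplicated in the collapsed projection, which is what would make it run once per extracted field.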