[ https://issues.apache.org/jira/browse/SPARK-17728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15544309#comment-15544309 ]
Jacob Eisinger commented on SPARK-17728:
----------------------------------------

Also, it is interesting to note that this occurs for parquet-backed data, and not for Datasets generated in memory. For example,
{code}
val as = spark.read.parquet("/tmp/as.parquet")
{code}
triggers the behavior, but
{code}
val as = (1 to 10).toDF("a")
{code}
does not.

> UDFs are run too many times
> ---------------------------
>
>                 Key: SPARK-17728
>                 URL: https://issues.apache.org/jira/browse/SPARK-17728
>             Project: Spark
>          Issue Type: Bug
>      Components: Spark Core
>    Affects Versions: 2.0.0
>        Environment: Databricks Cloud / Spark 2.0.0
>            Reporter: Jacob Eisinger
>            Priority: Minor
>        Attachments: over_optimized_udf.html
>
>
> h3. Background
> Longer-running processes might run analytics or contact external services from UDFs. The response might not be just a single field, but a structure of information. When attempting to break out this information, it is critical that the query is optimized correctly.
> h3. Steps to Reproduce
> # Create some sample data.
> # Create a UDF that returns multiple attributes.
> # Run the UDF over the data.
> # Create new columns from the multiple attributes.
> # Observe the run time.
> h3. Actual Results
> The UDF is executed *multiple times* _per row._
> h3. Expected Results
> The UDF should be executed only *once* _per row._
> h3. Workaround
> Cache the Dataset after UDF execution.
> h3. Details
> For code and more details, see [^over_optimized_udf.html]

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
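The steps above can be sketched roughly as follows. This is a hypothetical reproduction, not the code from the attachment [^over_optimized_udf.html]: the names ({{callCount}}, {{expensive}}) and the invocation counter are assumptions for illustration, and the counter is only reliable in local mode, where the UDF runs in the driver JVM.
{code}
import java.util.concurrent.atomic.AtomicLong
import org.apache.spark.sql.functions._
import spark.implicits._

// Counts actual UDF invocations (local mode only; executors would
// each have their own copy of this counter).
val callCount = new AtomicLong(0)

// Step 2: a UDF that returns multiple attributes. A Scala tuple is
// encoded as a struct with fields _1 and _2.
val expensive = udf { (a: Int) =>
  callCount.incrementAndGet()
  (a * 2, a * 3)
}

// Use parquet-backed data, since the comment above notes the behavior
// appears for parquet and not for in-memory Datasets.
(1 to 10).toDF("a").write.mode("overwrite").parquet("/tmp/as.parquet")
val as = spark.read.parquet("/tmp/as.parquet")

// Steps 3-4: run the UDF once, then break the struct out into columns.
val split = as
  .withColumn("r", expensive(col("a")))
  .select(col("a"), col("r._1"), col("r._2"))
split.collect()
// With 10 input rows, a count above 10 shows the UDF ran more than
// once per row, because the optimizer re-derives "r" per projected field.
println(s"UDF invocations: ${callCount.get()}")

// Workaround: cache the Dataset after UDF execution so the struct is
// materialized once and the projections read from the cached plan.
callCount.set(0)
val cached = as.withColumn("r", expensive(col("a"))).cache()
cached.select(col("r._1"), col("r._2")).collect()
println(s"UDF invocations after cache: ${callCount.get()}")
{code}
The cache works because it cuts the lineage at the point where the UDF has already run, so the optimizer can no longer push the UDF expression down into each column projection.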