[ 
https://issues.apache.org/jira/browse/SPARK-35695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17367571#comment-17367571
 ] 

Dongjoon Hyun commented on SPARK-35695:
---------------------------------------

Thank you, [~tanelk]. This is an important fix. Although the scope is broader, 
I'll convert this to a subtask of SPARK-33828 to give it more visibility.

> QueryExecutionListener does not see any observed metrics fired before 
> persist/cache
> -----------------------------------------------------------------------------------
>
>                 Key: SPARK-35695
>                 URL: https://issues.apache.org/jira/browse/SPARK-35695
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.2.0
>            Reporter: Tanel Kiis
>            Assignee: Tanel Kiis
>            Priority: Major
>             Fix For: 3.0.3, 3.2.0, 3.1.3
>
>
> This example properly fires the event
> {code}
> spark.range(100)
>   .observe(
>     name = "other_event",
>     avg($"id").cast("int").as("avg_val"))
>   .collect()
> {code}
> But when I add a persist() call, no event is fired (or none is seen; I am not sure which):
> {code}
> spark.range(100)
>   .observe(
>     name = "my_event",
>     avg($"id").cast("int").as("avg_val"))
>   .persist()
>   .collect()
> {code}
> The listener:
> {code}
>     val metricMaps = ArrayBuffer.empty[Map[String, Row]]
>     val listener = new QueryExecutionListener {
>       override def onSuccess(funcName: String, qe: QueryExecution, duration: Long): Unit = {
>         metricMaps += qe.observedMetrics
>       }
>       override def onFailure(funcName: String, qe: QueryExecution, exception: Exception): Unit = {
>         // No-op
>       }
>     }
>     spark.listenerManager.register(listener)
> {code}
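>
> For reference, the two snippets above can be combined into one end-to-end reproduction. This is only a sketch: it assumes an existing local {{spark}} session (e.g. in spark-shell), and since QueryExecutionListener events are delivered asynchronously, a short wait may be needed before inspecting the buffer:
> {code}
> import scala.collection.mutable.ArrayBuffer
> import org.apache.spark.sql.Row
> import org.apache.spark.sql.execution.QueryExecution
> import org.apache.spark.sql.functions.avg
> import org.apache.spark.sql.util.QueryExecutionListener
> import spark.implicits._
>
> // Collect the observed metrics of every successfully executed query.
> val metricMaps = ArrayBuffer.empty[Map[String, Row]]
> spark.listenerManager.register(new QueryExecutionListener {
>   override def onSuccess(funcName: String, qe: QueryExecution, duration: Long): Unit =
>     metricMaps += qe.observedMetrics
>   override def onFailure(funcName: String, qe: QueryExecution, exception: Exception): Unit = ()
> })
>
> spark.range(100)
>   .observe(name = "my_event", avg($"id").cast("int").as("avg_val"))
>   .persist()
>   .collect()
>
> // With the fix, metricMaps should eventually contain a map with the
> // "my_event" entry (avg of 0..99 cast to int, i.e. 49); before the fix,
> // the persist() call caused the observed metric to be lost.
> {code}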



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
