[ https://issues.apache.org/jira/browse/SPARK-36263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17676265#comment-17676265 ]

Nick Hryhoriev commented on SPARK-36263:
----------------------------------------

Hi, I found that this feature does not work for the `foreach` and `foreachPartition` 
actions, possibly because they use `rdd.foreach` under the hood and therefore 
never go through `QueryExecutionListener`.
Example to reproduce:
[https://gist.github.com/GrigorievNick/e7cf9ec5584b417d9719e2812722e6d3]
Am I missing something, or is this a known issue?
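
For reference, a minimal PySpark sketch of the reported scenario (this is not the linked gist; the DataFrame, metric name, and no-op lambda are illustrative placeholders):

```python
from pyspark.sql import Observation, SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.master("local[*]").getOrCreate()

# Placeholder data and metric; the real job in the gist will differ.
obs = Observation("metrics")
observed = spark.range(100).observe(obs, F.count(F.lit(1)).alias("rows"))

# foreach/foreachPartition delegate to rdd.foreach under the hood, so (per the
# report above) the observed metrics are never published, and obs.get, which
# blocks until an action fills the observation, would hang here.
observed.foreach(lambda row: None)
print(obs.get)
```

Replacing the `foreach` call with an action such as `observed.count()` lets `obs.get` return as expected, which is consistent with the metrics only being delivered through the SQL execution path.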

> Add Dataset.observe(Observation, Column, Column*) to PySpark
> ------------------------------------------------------------
>
>                 Key: SPARK-36263
>                 URL: https://issues.apache.org/jira/browse/SPARK-36263
>             Project: Spark
>          Issue Type: New Feature
>          Components: PySpark
>    Affects Versions: 3.3.0
>            Reporter: Enrico Minack
>            Assignee: Enrico Minack
>            Priority: Major
>             Fix For: 3.3.0
>
>
> With SPARK-34806 we now have a way to use the `Dataset.observe` method 
> without the need to interact with 
> `org.apache.spark.sql.util.QueryExecutionListener`. This allows us to easily 
> retrieve observations in PySpark.
> Adding a `Dataset.observe(Observation, Column, Column*)` equivalent to 
> PySpark's `DataFrame` is straightforward and allows observations to be 
> utilised from Python.
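
To illustrate the API described above (this example is not part of the issue text; the data and metric names are made up), the PySpark usage looks roughly like this:

```python
from pyspark.sql import Observation, SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

obs = Observation("stats")
observed = spark.range(10).observe(
    obs,
    F.count(F.lit(1)).alias("rows"),
    F.max("id").alias("max_id"),
)

observed.count()   # any action on the observed DataFrame computes the metrics
print(obs.get)     # e.g. {'rows': 10, 'max_id': 9}; no QueryExecutionListener needed
```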


