[ https://issues.apache.org/jira/browse/SPARK-32500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17168423#comment-17168423 ]

Abhishek Dixit commented on SPARK-32500:
----------------------------------------

[~rohitmishr1484]

I used the following example to reproduce the issue on Spark 2.4.6:
{code:python}
from pyspark.sql.functions import *

def foreach_batch_function(stream_df, epoch_id):
    stream_df.write \
        .format("json") \
        .mode("append") \
        .option("path", "/tmp/testdata/output") \
        .save()

checkpoint_location = "/tmp/testdata/checkpoint"

query = (
    spark.readStream
        .format("rate")
        .option("rowsPerSecond", 100)
        .option("numPartitions", 4)
        .load()
        .writeStream
        .foreachBatch(foreach_batch_function)
        .option("checkpointLocation", checkpoint_location)
        .outputMode("append")
        .start()
)
{code}
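As a diagnostic sketch (my addition, not part of the original repro), one could print the SparkContext local properties from inside the batch function. The key names "sql.streaming.queryId" and "streaming.sql.batchId" are assumptions mirrored from the Scala side and may differ between versions; the sketch also assumes the same global `spark` session as above. Note that local properties are JVM-thread-local, so what this prints depends on how the Python callback is bridged to the JVM, which is exactly the suspected problem area.
{code:python}
def foreach_batch_function(stream_df, epoch_id):
    # Diagnostic sketch only. The property keys below are assumptions
    # (mirroring what the Scala streaming engine sets) and may differ
    # between Spark versions.
    sc = spark.sparkContext
    print("epoch_id:", epoch_id)
    print("queryId :", sc.getLocalProperty("sql.streaming.queryId"))
    print("batchId :", sc.getLocalProperty("streaming.sql.batchId"))

    stream_df.write \
        .format("json") \
        .mode("append") \
        .option("path", "/tmp/testdata/output") \
        .save()
{code}
With Scala foreachBatch these ids show up in the job properties on the Jobs page for every micro-batch; with the PySpark path they do not, which appears to match the attached screenshot.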
 

Let me know if you need any more information.
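As an aside (my addition, not from the original report): until the job properties are populated, the same ids are reachable from the StreamingQuery handle on the driver, which may be enough for frameworks that only need the values rather than per-job tagging. A minimal sketch against the `query` handle from the snippet above:
{code:python}
# Sketch: read the ids from the query handle instead of the job properties.
print("query id:", query.id)      # stable across restarts from the same checkpoint
print("run id  :", query.runId)   # changes on every (re)start
progress = query.lastProgress     # dict, or None until the first batch finishes
if progress is not None:
    print("batch id:", progress["batchId"])
{code}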

> Query and Batch Id not set for Structured Streaming Jobs in case of 
> ForeachBatch in PySpark
> -------------------------------------------------------------------------------------------
>
>                 Key: SPARK-32500
>                 URL: https://issues.apache.org/jira/browse/SPARK-32500
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark, Structured Streaming
>    Affects Versions: 2.4.6
>            Reporter: Abhishek Dixit
>            Priority: Major
>         Attachments: Screen Shot 2020-07-30 at 9.04.21 PM.png
>
>
> Query Id and Batch Id information is not available for jobs started by a 
> structured streaming query when the _foreachBatch_ API is used in PySpark.
> This happens only with foreachBatch in PySpark: foreachBatch in Scala works 
> fine, and other structured streaming sinks in PySpark also work fine. I am 
> attaching a screenshot of the Jobs page.
> I think the job group is not set properly when _foreachBatch_ is used via 
> PySpark. I have a framework that depends on the _queryId_ and _batchId_ 
> information in the job properties, so it doesn't work for the 
> PySpark-foreachBatch use case.
>  


