[ 
https://issues.apache.org/jira/browse/SPARK-36984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukáš updated SPARK-36984:
--------------------------
    Description: 
The documentation at 
[https://spark.apache.org/docs/latest/streaming-programming-guide.html#advanced-sources]
 clearly states that *Kafka* (and Kinesis) are available in the Python API as of v3.1.2 for *Spark Streaming (DStreams)*. (see attachments for highlight)

However, there is no way to create a DStream from Kafka in PySpark >= 3.0.0, as 
the `kafka.py` file is missing from 
[https://github.com/apache/spark/tree/master/python/pyspark/streaming]. I'm 
coming from PySpark 2.4.4, where this was possible. _Should Kafka be removed as 
an advanced source for Spark Streaming in the Python API docs?_
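
For reference, this is roughly how a Kafka DStream was created in PySpark 2.4.x via the since-removed `pyspark.streaming.kafka` module (the topic name and broker address below are placeholders). The snippet needs PySpark 2.4.x plus the spark-streaming-kafka-0-8 package and cannot run on PySpark >= 3.0.0, which is exactly the issue:

```python
# Works only on PySpark 2.4.x with the spark-streaming-kafka-0-8 package;
# the pyspark.streaming.kafka module is gone as of Spark 3.0.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils  # ImportError on PySpark >= 3.0

sc = SparkContext(appName="KafkaDStreamExample")
ssc = StreamingContext(sc, batchDuration=5)

# Direct (receiver-less) stream from a hypothetical "events" topic
stream = KafkaUtils.createDirectStream(
    ssc, ["events"], {"metadata.broker.list": "localhost:9092"}
)
stream.pprint()

ssc.start()
ssc.awaitTermination()
```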

 

Note that I'm aware of the Kafka integration guide at 
[https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html],
 but I'm not interested in Structured Streaming, as it doesn't support arbitrary 
stateful operations in Python. DStreams support this functionality via 
`updateStateByKey`.
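
The stateful behaviour in question can be sketched without Spark at all. Below is a minimal plain-Python illustration of `updateStateByKey` semantics: the update function's signature `(new_values, last_state) -> state` follows the DStream API, while the batches, keys, and driver loop are made up for the example.

```python
def update_func(new_values, last_state):
    # Sum the batch's new counts into the running total; state starts as None
    return sum(new_values) + (last_state or 0)

# Simulate three micro-batches of (key, value) pairs
batches = [
    [("a", 1), ("b", 2)],
    [("a", 3)],
    [("b", 1), ("a", 1)],
]

state = {}
for batch in batches:
    grouped = {}
    for key, value in batch:
        grouped.setdefault(key, []).append(value)
    # Spark invokes the update function once per key per micro-batch
    for key, values in grouped.items():
        state[key] = update_func(values, state.get(key))

print(state)  # {'a': 5, 'b': 3}
```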



> Misleading Spark Streaming source documentation
> -----------------------------------------------
>
>                 Key: SPARK-36984
>                 URL: https://issues.apache.org/jira/browse/SPARK-36984
>             Project: Spark
>          Issue Type: Documentation
>          Components: Documentation, PySpark, Structured Streaming
>    Affects Versions: 3.0.1, 3.0.2, 3.0.3, 3.1.0, 3.1.1, 3.1.2
>            Reporter: Lukáš
>            Priority: Trivial
>         Attachments: docs_highlight.png
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
