Amit Menashe created SPARK-32962:
------------------------------------

             Summary: Spark Streaming
                 Key: SPARK-32962
                 URL: https://issues.apache.org/jira/browse/SPARK-32962
             Project: Spark
          Issue Type: Bug
          Components: DStreams
    Affects Versions: 2.4.5
            Reporter: Amit Menashe


Hey there,

I'm running a Spark Streaming job that is integrated with Kafka (and manages its 
offset commits in Kafka itself).

The problem is that when a failure occurs I want to reprocess the offset ranges 
that went wrong, so I catch the exception and do NOT commit that range (with 
commitAsync).
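
To make the setup concrete, here is a simplified sketch of the pattern I mean (the broker address, topic, group id, and the println "processing" are placeholders, not my actual job):

{code:scala}
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.{CanCommitOffsets, HasOffsetRanges, KafkaUtils}

object KafkaCommitSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("kafka-commit-sketch")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Placeholder broker/topic/group values, not the real job's configuration.
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "my-consumer-group",
      "auto.offset.reset" -> "earliest",
      "enable.auto.commit" -> (false: java.lang.Boolean) // offsets committed manually below
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](Seq("my-topic"), kafkaParams))

    stream.foreachRDD { rdd =>
      // Capture the offset ranges of this micro-batch before acting on it.
      val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      try {
        rdd.foreachPartition { records =>
          records.foreach(record => println(record.value())) // stand-in for the real processing
        }
        // Commit to Kafka only when the whole batch succeeded.
        stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
      } catch {
        case e: Exception =>
          // On failure the range is intentionally NOT committed,
          // expecting it to be reprocessed.
          println(s"Batch failed, skipping commit: ${e.getMessage}")
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
{code}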

However, I notice the stream keeps proceeding (without any commit being made).

Moreover, I later removed all the commitAsync calls entirely, and the stream 
still keeps proceeding!

I guess there might be some inner cache or similar mechanism that lets the 
streaming job keep consuming entries from Kafka.


Could you please advise?


