Hi Niek,

That's expected. I just answered on Stack Overflow.
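
The short version: when a query restarts from a checkpoint, the last batch
recorded in the offset log is run again, because the engine cannot know
whether the sink finished writing its output, so sinks are expected to handle
a repeated batchId idempotently. Below is a minimal sketch of the scenario
(Kafka source, console sink; the broker address, topic name, and checkpoint
path are placeholders, not taken from your code):

import org.apache.spark.sql.SparkSession

object ResumeFromCheckpointDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("resume-from-checkpoint-demo")
      .getOrCreate()

    // Kafka source; the offsets chosen for each batch are written to the
    // checkpoint's offset log before the batch is executed.
    val input = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092") // placeholder broker
      .option("subscribe", "events")                        // placeholder topic
      .load()

    // After a restart, the last batch in the offset log is executed again
    // because the engine cannot be sure its output was fully delivered.
    // The sink sees the same batchId once more, which is why the last
    // message appears to be processed twice.
    val query = input.selectExpr("CAST(value AS STRING) AS value")
      .writeStream
      .format("console")
      .option("checkpointLocation", "/tmp/checkpoints/resume-demo") // placeholder path
      .start()

    query.awaitTermination()
  }
}

Stop the query after a batch completes and restart it with the same
checkpointLocation: the console sink simply prints the last batch again,
while a sink that keys its writes on batchId (like the file sink) treats the
re-run as a no-op. That is how structured streaming gets end-to-end
exactly-once results from an at-least-once replay.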

On Sun, Dec 25, 2016 at 8:07 AM, Niek <niek.bartholom...@gmail.com> wrote:

> Hi,
>
> I described my issue in full detail on
> http://stackoverflow.com/questions/41300223/spark-structured-steaming-from-kafka-last-message-processed-again-after-resume
>
> Any idea what's going wrong?
>
> Looking at the code base on
> https://github.com/apache/spark/blob/3f62e1b5d9e75dc07bac3aa4db3e8d0615cc3cc3/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala#L290,
> I don't understand why you are resuming with an already committed offset
> (the one from currentBatchId - 1)
>
> Thanks,
>
> Niek.
>
