I'm not sure how it is implemented, but in general I wouldn't expect such
behavior from connectors that read from non-streaming storage. The query
result may depend on "when" the records are fetched.
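For illustration, here's a minimal Scala sketch of a stream-static join,
assuming a hypothetical demo.users Cassandra table with a user_id column
and a Kafka topic named "events" (none of these names come from the
original question). The point is that the static side is a snapshot read,
so what the join sees depends on when that read happens:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder.appName("stream-static-join").getOrCreate()

    // Static side: a snapshot read from Cassandra, not a change stream.
    val users = spark.read
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "demo", "table" -> "users"))
      .load()

    // Streaming side: events arriving on a (hypothetical) Kafka topic.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "events")
      .load()
      .selectExpr("CAST(key AS STRING) AS user_id",
                  "CAST(value AS STRING) AS payload")

    // The join sees whatever the static read returns at the time each
    // micro-batch is planned; there is no notion of later updates or
    // deletes flowing into rows that were already joined.
    val joined = events.join(users, "user_id")

    joined.writeStream
      .format("console")
      .start()
      .awaitTermination()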

If you need to reflect those changes in your query, you'll probably want
to find a way to retrieve "change logs" from your external storage (or
have your system/product produce change logs itself if the storage
doesn't support them) and apply them to your query. The keyword to google
for further reading is "Change Data Capture".
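As a rough sketch of what consuming such a change log might look like
(this is not the Cassandra connector's behavior; the topic name
"users.changelog", the Debezium-style JSON schema, and the "op" field are
all assumptions), one could read change events from Kafka and apply the
newest change per key in each micro-batch:

    import org.apache.spark.sql.{DataFrame, SparkSession}
    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions._
    import org.apache.spark.sql.types._

    val spark = SparkSession.builder.appName("cdc-apply").getOrCreate()
    import spark.implicits._

    // Assumed change-event layout: op = "c"/"u"/"d", plus key, payload,
    // and an event timestamp for ordering.
    val changeSchema = new StructType()
      .add("op", StringType)
      .add("id", StringType)
      .add("name", StringType)
      .add("ts_ms", LongType)

    val changes = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "users.changelog")
      .load()
      .select(from_json($"value".cast("string"), changeSchema).as("c"))
      .select("c.*")

    changes.writeStream
      .foreachBatch { (batch: DataFrame, _: Long) =>
        // Keep only the newest change per key, then upsert/delete against
        // the sink. Here it is just printed; a real job would merge these
        // rows into the target table.
        val w = Window.partitionBy("id").orderBy(col("ts_ms").desc)
        val latest = batch
          .withColumn("rn", row_number().over(w))
          .where($"rn" === 1)
          .drop("rn")
        latest.show(truncate = false)
      }
      .start()
      .awaitTermination()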

Otherwise, you can take the traditional approach: run a batch query
periodically and replace the entire output.
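A minimal sketch of that batch approach, with a made-up table name and
output path; re-reading the whole table means deletes and updates are
naturally reflected in each new snapshot:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder.appName("periodic-snapshot").getOrCreate()

    while (true) {
      // Re-read the full table as of "now".
      val snapshot = spark.read
        .format("org.apache.spark.sql.cassandra")
        .options(Map("keyspace" -> "demo", "table" -> "users"))
        .load()

      // Replace the whole output so deletes/updates are picked up.
      snapshot.write
        .mode("overwrite")
        .parquet("/tmp/users_snapshot")

      Thread.sleep(10 * 60 * 1000) // e.g. every 10 minutes
    }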

On Thu, Jun 25, 2020 at 1:26 PM Rahul Kumar <rk20.stor...@gmail.com> wrote:

> Hello everyone,
>
> I was wondering how the Cassandra Spark connector deals with
> deleted/updated records during a readStream operation. If a record was
> already fetched into Spark memory and it then gets updated or deleted in
> the database, is that reflected in a streaming join?
>
> Thanks,
> Rahul
>
