Thanks for the answers.
I resolved the problem of reconnection using events. It worked very well.
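For reference, the listener I ended up with looks more or less like this
(a sketch only; the startStreamer/stopStreamer runnables stand in for my
actual streamer lifecycle code, and depending on your configuration the
event types may need to be enabled via IgniteConfiguration#setIncludeEventTypes):

import org.apache.ignite.Ignite;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class ReconnectHandler {
    public static void register(Ignite ignite, Runnable stopStreamer, Runnable startStreamer) {
        IgnitePredicate<Event> lsnr = evt -> {
            if (evt.type() == EventType.EVT_CLIENT_NODE_DISCONNECTED)
                stopStreamer.run();  // tear the KafkaStreamer down while offline
            else if (evt.type() == EventType.EVT_CLIENT_NODE_RECONNECTED)
                startStreamer.run(); // bring it back once the cluster is reachable
            return true; // keep the listener registered
        };

        ignite.events().localListen(lsnr,
            EventType.EVT_CLIENT_NODE_DISCONNECTED,
            EventType.EVT_CLIENT_NODE_RECONNECTED);
    }
}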
What I found is the following:
The KafkaStreamer consumes records and sends them to the IgniteDataStreamer.
It doesn't handle the IgniteFuture that is returned.
If the connection to the server is interrupted (by a server restart, for
example), the KafkaStreamer is stopped and the Kafka consumers are stopped,
but the records that were already sent to the streamer and (I believe) are
sitting in its buffer are still pending insertion into the cache.
There is no way to recover them, as far as I know.
Am I right?
Should I implement a custom KafkaStreamer that, in that situation, handles
the IgniteFuture and, say, retries the insertion into the cache?
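Something like the following is what I have in mind (a rough sketch;
RetryingStreamer and its retry queue are my own invention, not anything
from the Ignite or KafkaStreamer API):

import java.util.AbstractMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.lang.IgniteFuture;

// Sketch: watch the future of every addData() call and park failed entries
// so they can be re-submitted after the client reconnects.
public class RetryingStreamer<K, V> {
    private final IgniteDataStreamer<K, V> streamer;
    private final BlockingQueue<Map.Entry<K, V>> failed = new LinkedBlockingQueue<>();

    public RetryingStreamer(IgniteDataStreamer<K, V> streamer) {
        this.streamer = streamer;
    }

    public void add(K key, V val) {
        IgniteFuture<?> fut = streamer.addData(key, val);
        fut.listen(f -> {
            try {
                f.get(); // throws if the batch holding this entry was not flushed
            } catch (Exception e) {
                failed.offer(new AbstractMap.SimpleEntry<>(key, val));
            }
        });
    }

    // Call after EVT_CLIENT_NODE_RECONNECTED to re-submit everything that failed.
    public void retryFailed() {
        Map.Entry<K, V> e;
        while ((e = failed.poll()) != null)
            add(e.getKey(), e.getValue());
    }
}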

Another question: I'm using a grid service to start the streamer. What is
the benefit of this vs. a simple Spring service if I'm using Kubernetes for
deployment?

On Fri, Nov 20, 2020 at 5:01 PM akorensh <alexanderko...@gmail.com> wrote:

> Hi,
>   I think listening to events would be a good solution for you.
>
> There are two discovery events that are triggered on the client node when
> it
> is disconnected from or reconnected to the cluster:
>
> EVT_CLIENT_NODE_DISCONNECTED
>
> EVT_CLIENT_NODE_RECONNECTED
>
> see:
>
> https://ignite.apache.org/docs/latest/clustering/connect-client-nodes#client-disconnectedreconnected-events
>
>
> As for StreamReceiver: Keep in mind that the logic implemented in a stream
> receiver is executed on the node where data is to be stored.  If the server
> where the data resides crashes, your code might not execute.
> https://ignite.apache.org/docs/latest/data-streaming#stream-visitor
>
> Thanks, Alex


-- 
Facundo Maldonado
