flush() guarantees completion of all futures returned by addData(Object, Object):
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteDataStreamer.html#flush--
flush() will send the batch, but it is still possible for the server to
crash before the message reaches it.
If you ...
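For illustration, a minimal sketch of that flush()/future relationship (the
cache name "myCache" and the bare-bones node startup are assumptions, not
part of the original setup):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteFuture;

public class FlushSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        ignite.getOrCreateCache("myCache");

        try (IgniteDataStreamer<Integer, String> streamer =
                 ignite.dataStreamer("myCache")) {
            IgniteFuture<?> fut = streamer.addData(1, "value-1");

            // flush() sends any buffered batches and blocks until the
            // futures returned by addData() are completed.
            streamer.flush();

            // The future is done here; get() re-throws the error if the
            // batch failed (e.g. the server crashed while it was in flight).
            fut.get();
        }

        ignite.close();
    }
}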
Thanks for the answers.
I resolved the problem of reconnection using events. It worked very well.
What I found is the following: the KafkaStreamer consumes records and sends
them to the IgniteDataStreamer, but it doesn't handle the returned
IgniteFuture. If the connection with the server is ...
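One way to stop those failures from being dropped silently is to attach a
listener to each future returned by addData(). A sketch (the helper and its
error handling are hypothetical, not what KafkaStreamer does today):

import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.lang.IgniteFuture;

public class ListenedAdd {
    // Hypothetical helper: forwards one entry and logs, instead of
    // dropping, any failure reported through the returned future.
    static void addData(IgniteDataStreamer<String, String> streamer,
                        String key, String val) {
        IgniteFuture<?> fut = streamer.addData(key, val);

        fut.listen(f -> {
            try {
                f.get(); // Re-throws if the batch holding this entry failed.
            }
            catch (Exception e) {
                // Application-specific recovery: retry, dead-letter, etc.
                System.err.println("Streaming failed for " + key + ": " + e);
            }
        });
    }
}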
Hi,
I think listening to events would be a good solution for you.
There are two discovery events that are triggered on the client node when it
is disconnected from or reconnected to the cluster:
EVT_CLIENT_NODE_DISCONNECTED
EVT_CLIENT_NODE_RECONNECTED
see: ...
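A minimal sketch of registering a local listener for those two events (note
that the event types must first be enabled via
IgniteConfiguration.setIncludeEventTypes(...)):

import org.apache.ignite.Ignite;
import org.apache.ignite.events.EventType;

public class ReconnectEvents {
    static void register(Ignite ignite) {
        ignite.events().localListen(evt -> {
            if (evt.type() == EventType.EVT_CLIENT_NODE_DISCONNECTED)
                System.out.println("Disconnected: stop the streamer here.");
            else if (evt.type() == EventType.EVT_CLIENT_NODE_RECONNECTED)
                System.out.println("Reconnected: restart the streamer here.");

            return true; // Keep the listener registered.
        },
        EventType.EVT_CLIENT_NODE_DISCONNECTED,
        EventType.EVT_CLIENT_NODE_RECONNECTED);
    }
}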
I forgot to mention: I'm starting the KafkaStreamer in a cluster service,
pretty similar to all the examples that are around.
I saw the exception in the documentation; my concern here is where I should
catch it, given that I initialize and set up the streamer in the init()
method and start it in ...
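For context, a minimal sketch of such a service (the cache name is an
assumption, and the Kafka wiring is elided):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.resources.IgniteInstanceResource;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;
import org.apache.ignite.stream.kafka.KafkaStreamer;

public class KafkaStreamerService implements Service {
    @IgniteInstanceResource
    private Ignite ignite;

    private transient IgniteDataStreamer<String, String> dataStreamer;
    private transient KafkaStreamer<String, String> kafkaStreamer;

    @Override public void init(ServiceContext ctx) {
        dataStreamer = ignite.dataStreamer("myCache");

        kafkaStreamer = new KafkaStreamer<>();
        kafkaStreamer.setIgnite(ignite);
        kafkaStreamer.setStreamer(dataStreamer);
        // setTopic(...), setConsumerConfig(...), setSingleTupleExtractor(...)
        // omitted here; see the wiring sketch further down the thread.
    }

    @Override public void execute(ServiceContext ctx) {
        // start() is non-blocking: it spawns the Kafka consumer threads.
        kafkaStreamer.start();
    }

    @Override public void cancel(ServiceContext ctx) {
        kafkaStreamer.stop();
        dataStreamer.close();
    }
}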
Hi,
You can use the disconnect events/exception and then call KafkaStreamer.stop().
see:
https://ignite.apache.org/docs/latest/clustering/connect-client-nodes#client-disconnectedreconnected-events
https://ignite.apache.org/docs/latest/clustering/connect-client-nodes
Here, look for: "While a client is in a disconnected state ..."
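A sketch of the pattern that passage describes, applied around the streamer
(whether start() is where the exception surfaces depends on the setup, so
treat the placement as an assumption):

import org.apache.ignite.IgniteClientDisconnectedException;
import org.apache.ignite.stream.kafka.KafkaStreamer;

public class ReconnectHandling {
    static void runWithReconnect(KafkaStreamer<String, String> kafkaStreamer) {
        try {
            kafkaStreamer.start();
        }
        catch (IgniteClientDisconnectedException e) {
            kafkaStreamer.stop();

            // Block until the client has rejoined the cluster.
            e.reconnectFuture().get();

            kafkaStreamer.start();
        }
    }
}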
Hi all, I'm having some problems dealing with the KafkaStreamer.
I have a deployment with a streamer (a client node) that consumes records
from a Kafka topic, and a data node (cache storage).
If, for any reason, the cache node crashes or simply restarts, the client
node gets disconnected, but the ...
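For reference, a minimal sketch of the client-side wiring described above
(cache name, topic, broker address, and String keys/values are assumptions):

import java.util.AbstractMap;
import java.util.Collections;
import java.util.Properties;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.ignite.stream.kafka.KafkaStreamer;

public class StreamerNode {
    public static void main(String[] args) {
        Ignition.setClientMode(true);

        // The cache "myCache" is assumed to exist on the data node.
        try (Ignite ignite = Ignition.start();
             IgniteDataStreamer<String, String> dataStreamer =
                 ignite.dataStreamer("myCache")) {
            KafkaStreamer<String, String> kafkaStreamer = new KafkaStreamer<>();
            kafkaStreamer.setIgnite(ignite);
            kafkaStreamer.setStreamer(dataStreamer);
            kafkaStreamer.setTopic(Collections.singletonList("myTopic"));
            kafkaStreamer.setThreads(1);

            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "ignite-streamer");
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            kafkaStreamer.setConsumerConfig(props);

            // Map each Kafka record to a cache entry.
            kafkaStreamer.setSingleTupleExtractor(rec ->
                new AbstractMap.SimpleEntry<>(
                    String.valueOf(rec.key()), String.valueOf(rec.value())));

            kafkaStreamer.start();

            // ... run until shutdown ...

            kafkaStreamer.stop();
        }
    }
}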