Hi Team,

I have a sample Spark application that reads from Kafka using the direct API,
applies some transformations, and stores the results to Cassandra (using
saveToCassandra(....)).
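
For reference, the pipeline looks roughly like the sketch below (Spark 1.x
DStream API with the DataStax spark-cassandra-connector; the broker address,
topic, checkpoint dir, keyspace, table and column names are simplified
placeholders, not the real ones):

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils
import com.datastax.spark.connector._
import com.datastax.spark.connector.streaming._

object KafkaToCassandra {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("KafkaToCassandra")
      .set("spark.cassandra.connection.host", "127.0.0.1") // placeholder host

    val ssc = new StreamingContext(conf, Seconds(10))
    ssc.checkpoint("/tmp/spark-checkpoint") // placeholder checkpoint dir

    val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("events")) // placeholder topic

    // some transformation: parse each "id,payload" message into a tuple
    val rows = stream.map { case (_, value) =>
      val parts = value.split(",", 2)
      (parts(0), parts(1))
    }

    // write each micro-batch to Cassandra (placeholder keyspace/table/columns)
    rows.saveToCassandra("my_keyspace", "my_table", SomeColumns("id", "payload"))

    ssc.start()
    ssc.awaitTermination()
  }
}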

If Cassandra goes down, the application logs a NoHostAvailableException (as
expected). But in the meantime the new incoming messages are lost, because the
direct API keeps creating new checkpoints and deleting the previous ones.

Does that mean I should handle the exception on the application side?

Or is there some other hook for handling this?
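
For example, would something like the sketch below be the right way to do it?
It replaces the plain saveToCassandra call above with a foreachRDD that retries
while Cassandra is down, so the batch doesn't complete (and the offsets aren't
checkpointed past it) until the data is actually written. The retry count and
sleep interval are arbitrary placeholders:

rows.foreachRDD { rdd =>
  var attemptsLeft = 10
  var written = false
  while (!written) {
    try {
      rdd.saveToCassandra("my_keyspace", "my_table", SomeColumns("id", "payload"))
      written = true
    } catch {
      // I assume NoHostAvailableException can surface either directly (from
      // the driver-side schema lookup) or wrapped in a SparkException from
      // the failed write job, so this catches any non-fatal failure, waits
      // for Cassandra to come back, and retries. Once retries are exhausted
      // the exception propagates and fails the batch, so these offsets are
      // not marked as done.
      case scala.util.control.NonFatal(e) if attemptsLeft > 1 =>
        attemptsLeft -= 1
        Thread.sleep(30000L)
    }
  }
}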

Thanks in advance.

Regards,
Sam


