Hi all,
I am curious how fault tolerance is achieved in Spark. More specifically: what 
do I need to do to make sure my aggregations, which come from streams, are 
fault tolerant when saved into Cassandra? I will have nodes die, and I would 
not like to count "tuples" multiple times.
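To make the question concrete, the behavior I'm after is roughly what this plain-Java sketch shows: if writes are upserts keyed by something deterministic (here a made-up batch id plus aggregation key, no actual Spark or Cassandra involved), replaying a batch after a failure does not double-count.

```java
import java.util.HashMap;
import java.util.Map;

// Simulated sink: aggregates are keyed by (batchId, key), so a replayed
// batch overwrites its earlier write (an upsert) instead of incrementing.
// All names here are illustrative, not any real Spark or Cassandra API.
public class IdempotentSinkDemo {
    // "table": batchId + "/" + key -> count, mimicking an upsert by primary key
    static final Map<String, Long> table = new HashMap<>();

    static void writeBatch(long batchId, Map<String, Long> counts) {
        counts.forEach((k, v) -> table.put(batchId + "/" + k, v)); // upsert, not +=
    }

    static long totalFor(String key) {
        return table.entrySet().stream()
                .filter(e -> e.getKey().endsWith("/" + key))
                .mapToLong(Map.Entry::getValue)
                .sum();
    }

    public static void main(String[] args) {
        writeBatch(1, Map.of("clicks", 10L));
        writeBatch(1, Map.of("clicks", 10L)); // replay after a node failure
        writeBatch(2, Map.of("clicks", 5L));
        System.out.println(totalFor("clicks")); // prints 15, not 25
    }
}
```

In other words, I'd like the replayed batch to be harmless; I'm asking what Spark itself gives me toward this, and what I have to arrange on the sink side.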

For example, in Trident you have to implement specific interfaces to get 
exactly-once semantics. Is there a similar mechanism in Spark?

Thanks
-Adrian
