handles
back-pressure gracefully.
Thanks a lot in advance!
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Streaming-fault-tolerance-benchmark-tp27528.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Hi guys,
I have a question about how the basics of D-Streams, accumulators, failure
recovery, and speculative execution interact.
Let's say I have a streaming app that takes a stream of strings, formats
them (let's say it converts each to Unicode), and prints them (e.g. on a
news ticker). I know print()
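On the accumulator part of the question: Spark's documentation notes that accumulator updates performed inside transformations may be applied more than once if a task is re-executed (after a failure or a speculative duplicate), so only updates inside actions carry exactly-once semantics. Below is a minimal pure-Python sketch of that at-least-once behaviour; all names are illustrative and no Spark is involved.

```python
# Simulate an accumulator updated by a task that may be retried.
# Purely illustrative; this is not Spark's implementation.

class Accumulator:
    """A toy counter standing in for a Spark accumulator."""
    def __init__(self):
        self.value = 0

    def add(self, n):
        self.value += n

def run_task(acc, records, fail_once=[True]):
    """A 'task' that formats records and counts them via the accumulator.

    The mutable default argument makes the first invocation fail,
    mimicking an executor failure followed by a retry.
    """
    for _ in records:
        acc.add(1)          # side effect happens before the failure
    if fail_once[0]:
        fail_once[0] = False
        raise RuntimeError("simulated executor failure")
    return [r.upper() for r in records]

acc = Accumulator()
records = ["hello", "world"]
try:
    run_task(acc, records)            # first attempt updates acc, then fails
except RuntimeError:
    result = run_task(acc, records)   # retry updates acc a second time

print(result)     # ['HELLO', 'WORLD']
print(acc.value)  # 4, not 2: the failed attempt's updates were counted too
```

The point of the sketch is that the side effect (the accumulator update) survives the failed attempt, so the retried task double-counts.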
Hello all,
I wrote a blog post around the issue I reported before:
http://metabroadcast.com/blog/design-your-spark-streaming-cluster-carefully
Could I ask for some feedback from those who are already using Spark Streaming
in production? How do you deal with fault tolerance and scalability?
Thanks a lot for
Reading the Spark Streaming Programming Guide I found a couple of
interesting points. First of all, while talking about receivers, it says:
*If the number of cores allocated to the application is less than or equal
to the number of input DStreams / receivers, then the system will receive
data, but will not be able to process it.*
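A practical consequence of that quote is that each receiver permanently occupies one core, so the application needs strictly more cores than receivers. A sketch of what that looks like at submission time, assuming a single receiver and a local run (the class name and jar are placeholders):

```shell
# With one receiver, allocate at least two cores so one core can receive
# while another processes. "local[1]" would starve processing entirely.
spark-submit \
  --master "local[2]" \
  --class com.example.StreamingApp \
  streaming-app.jar
```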