Well, we haven't really enabled recovery after running into this issue in
Spark 1.2. I do intend to try this again soon with Spark 1.6.1 and see if it
works out this time.
NB
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-recovery-takes-long
I tried re-broadcasting, but was confused by the documentation's warnings about
never changing the broadcast variables.
I've tried it in a local-mode process so far and it does seem to work as
intended. When (and if!) we start running on a real cluster, I hope this
holds up.
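What we tried is essentially the re-broadcast pattern: the broadcast itself is never mutated in place; instead the driver unpersists the old one and publishes a fresh, immutable snapshot. A minimal sketch of that pattern in plain Python, with no Spark dependency (`BroadcastRef` and `rebroadcast` are hypothetical names; in Spark the equivalent calls would be `old_bc.unpersist()` followed by `sc.broadcast(new_data)`):

```python
# Sketch of the re-broadcast pattern (plain Python, no Spark).
# The snapshot is treated as read-only once published; "updating"
# means replacing the whole reference with a new snapshot.

class BroadcastRef:
    """Holds the current immutable lookup snapshot (hypothetical helper)."""

    def __init__(self, snapshot):
        self._snapshot = snapshot  # never mutated after publication

    @property
    def value(self):
        return self._snapshot


def rebroadcast(old_ref, new_data):
    """Simulate unpersisting the old broadcast and issuing a new one."""
    # In Spark: old_bc.unpersist(); new_bc = sc.broadcast(new_data)
    del old_ref  # the old snapshot is discarded, not modified
    return BroadcastRef(dict(new_data))


bc = BroadcastRef({"a": 1})
assert bc.value["a"] == 1

bc = rebroadcast(bc, {"a": 2, "b": 3})
assert bc.value == {"a": 2, "b": 3}
```

The key point is that tasks only ever read whichever snapshot reference is current; nothing observes a half-updated value.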
Thanks
NB
On Tue, May 19, 2015 at 6:25 AM, Imran Rashid iras
Hello,
Once a broadcast variable is created using sparkContext.broadcast(), can it
ever be updated again? The use case is something like the underlying lookup
data changing over time.
Thanks
NB
On restart, everything is recreated from the checkpointed data.
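Recovery hinges on building the context through a checkpoint-aware factory rather than constructing it directly; in Spark this is `StreamingContext.getOrCreate(checkpointDir, createFn)`. A sketch of that get-or-create pattern in plain Python, with no Spark dependency (the file-based checkpoint here is only an illustration):

```python
import os
import pickle
import tempfile

# Generic get-or-create pattern (plain Python, no Spark): restore state
# from a checkpoint if one exists, otherwise build it fresh via the
# supplied factory and checkpoint it for the next run.

def get_or_create(checkpoint_path, create_fn):
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path, "rb") as f:
            return pickle.load(f)      # recovered: create_fn is NOT called
    state = create_fn()                # first run: build from scratch
    with open(checkpoint_path, "wb") as f:
        pickle.dump(state, f)
    return state


path = os.path.join(tempfile.mkdtemp(), "ckpt.pkl")
first = get_or_create(path, lambda: {"offsets": [0]})
second = get_or_create(path, lambda: {"offsets": ["should not run"]})
assert second == {"offsets": [0]}  # restored from checkpoint
```

The common failure mode is setting up the stream graph outside the factory function, in which case the restored context and the freshly built one disagree.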
Hope this helps,
NB
Sent from the Apache Spark User List mailing list archive
Some of the intermediate streams were not recovered properly, which caused
some of our computations to produce incorrect values.
Any help/insights into how to go about tackling this will be appreciated.
Thanks
NB.