I don't think this has anything to do with transferring anything from
the driver, or per task. I'm talking about a singleton object in the
JVM that loads whatever you want from wherever you want and holds it
in memory once per JVM. That is, I do not think you have to use
broadcast, or even anything Spark-specific.
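A minimal sketch of that singleton idea (the file path here is hypothetical, and the file must be readable from every executor JVM):

object SpamIpList {
  // Loaded lazily, once per executor JVM, the first time a task touches it.
  lazy val ips: Set[String] =
    scala.io.Source.fromFile("/path/to/spam_ip_list").getLines().toSet // hypothetical path
}

Usage from a stream would then be e.g. clickStream.filter(ip => !SpamIpList.ips(ip)).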
Hi Sean,
Thanks for your advice; a normal 'val' should suffice. But won't it be
serialized and transferred with every batch and every partition? That's
why broadcast exists, right?
For now I'm going to use 'val', but I'm still looking for a
broadcast-based solution.
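One hedged sketch of such a broadcast-based approach (an assumption, not something from this thread: the file path, the 60-second refresh interval, and the words DStream from the example later in this thread are all placeholders). It keeps the broadcast in a driver-side var and rebuilds it inside transform, whose body runs on the driver once per batch:

def loadSpamIps(): Set[String] =
  scala.io.Source.fromFile("/path/to/spam_ip_list").getLines().toSet // hypothetical path

var spamBroadcast = ssc.sparkContext.broadcast(loadSpamIps())
var lastReload = System.currentTimeMillis()

val filtered = words.transform { rdd =>
  if (System.currentTimeMillis() - lastReload > 60000L) { // assumed refresh interval
    spamBroadcast.unpersist() // drop the stale copy from the executors
    spamBroadcast = rdd.sparkContext.broadcast(loadSpamIps())
    lastReload = System.currentTimeMillis()
  }
  val spam = spamBroadcast // capture the current broadcast for this batch
  rdd.filter(word => !spam.value(word))
}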
On Sun, Jan 18, 2015 at 5:36 PM, Sean
I think this problem is not Spark-specific, since you are simply
side-loading some data into memory, so you do not need an answer that
uses Spark.
Simply load the data, then poll for an update each time it is accessed,
or at some reasonable interval? This is just something you write in
your own (non-Spark) code.
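Extending the singleton sketch above with that polling idea (the refresh interval is again an assumption):

object SpamIpList {
  private val refreshMs = 60000L // assumed: reload at most once a minute
  @volatile private var loadedAt = 0L
  @volatile private var cached: Set[String] = Set.empty
  def get: Set[String] = {
    if (System.currentTimeMillis() - loadedAt > refreshMs) synchronized {
      // double-checked so only one thread per JVM re-reads the file
      if (System.currentTimeMillis() - loadedAt > refreshMs) {
        cached = scala.io.Source.fromFile("/path/to/spam_ip_list").getLines().toSet
        loadedAt = System.currentTimeMillis()
      }
    }
    cached
  }
}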
Hi,
After some experiments, I've found three methods that work for this
'join a DStream with another dataset that is updated periodically'
problem.
1. Create an RDD in the transform operation
val words = ssc.socketTextStream("localhost", 9999).flatMap(_.split(" ")) // host/port are placeholders
val filtered = words.transform { rdd =>
  // one possible completion of the truncated example: re-read the list each batch
  val spam = rdd.sparkContext.textFile("/path/to/spam_ip_list").map((_, 1)) // hypothetical path
  rdd.map((_, 1)).subtractByKey(spam).map(_._1)
}
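Note that the body of transform runs on the driver for every batch interval, so the textFile above is re-read (and re-shuffled by subtractByKey) on every batch; that is the cost of always seeing the latest list.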
Can't you send a special event through Spark Streaming once the list is
updated? So you would have your normal events plus a special reload event.
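A hedged sketch of that reload-event idea (everything here is assumed rather than taken from the thread: a control socket on port 9998 that emits the line RELOAD whenever the list changes, a hypothetical file path, and the words DStream from the example below):

@volatile var spamIps: Set[String] = Set.empty

val control = ssc.socketTextStream("localhost", 9998) // assumed control channel
control.foreachRDD { rdd =>
  // foreachRDD's body runs on the driver, so updating the driver-side var is safe
  if (rdd.collect().contains("RELOAD"))
    spamIps = scala.io.Source.fromFile("/path/to/spam_ip_list").getLines().toSet
}

val filtered = words.transform { rdd =>
  val current = spamIps // snapshot the driver-side set once per batch
  rdd.filter(word => !current(word))
}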
On 17 Jan 2015 at 15:06, Ji ZHANG zhangj...@gmail.com wrote:
Hi,
I want to join a DStream with some other dataset, e.g. join a click
stream with a spam IP list. I can think of two possible solutions: one
is to use a broadcast variable, and the other is to use the transform
operation as described in the manual.
But the problem is that the spam IP list will be updated periodically.