I am working with windowed DStreams, where each window contains RDDs with the
following keys:
a,b,c
b,c,d
c,d,e
d,e,f
I want to get only the unique keys across all the RDDs in the window:
a,b,c,d,e,f
How can I do this in PySpark Streaming?
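One common way to do this is to call `distinct()` on each windowed RDD via `DStream.transform`. Since a live StreamingContext cannot run standalone here, the following is a minimal sketch of the same deduplication logic on plain Python lists standing in for the RDDs of one window; the commented PySpark calls are the assumed streaming equivalent, not code from the original post.

```python
# Sketch: deduplicating keys across the RDDs of one window.
# In PySpark Streaming this would typically be expressed as:
#   windowed = stream.window(windowDuration, slideDuration)
#   unique = windowed.transform(lambda rdd: rdd.distinct())
# (names here are illustrative). Below, the same logic on plain lists.

def unique_keys(rdds):
    """Return the sorted set of keys seen across all RDDs in the window."""
    seen = set()
    for rdd in rdds:
        seen.update(rdd)
    return sorted(seen)

window = [["a", "b", "c"], ["b", "c", "d"], ["c", "d", "e"], ["d", "e", "f"]]
print(unique_keys(window))  # ['a', 'b', 'c', 'd', 'e', 'f']
```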
--
Hello,
I am using Spark 1.6.0 and am trying to run a window operation on
DStreams.
Window_TwoMin = 4*60*1000
Slide_OneMin = 2*60*1000
census = ssc.textFileStream("./census_stream/") \
    .filter(lambda a: not a.startswith('-')) \
    .map(lambda b: b.split("\t")) \
    .map(lambda c:
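A point worth checking in the snippet above: in the PySpark Streaming API, `window(windowDuration, slideDuration)` takes durations in seconds, so millisecond-style values like `4*60*1000` may not mean what the variable names suggest. Since the streaming job itself cannot run standalone, here is a plain-Python sketch of what a window/slide pair does over a sequence of batches (all names and values are illustrative, not from the original post):

```python
# Plain-Python sketch of DStream.window(windowLen, slideLen) semantics,
# using lists of batches instead of a live StreamingContext.

def sliding_windows(batches, window_len, slide_len):
    """Yield the contents of the last `window_len` batches,
    advancing by `slide_len` batches each step."""
    for end in range(slide_len, len(batches) + 1, slide_len):
        start = max(0, end - window_len)
        window = []
        for batch in batches[start:end]:
            window.extend(batch)
        yield window

# Four batches; window covers 2 batches, sliding 1 batch at a time.
batches = [["a", "b"], ["c"], ["d"], ["e", "f"]]
for w in sliding_windows(batches, window_len=2, slide_len=1):
    print(w)
```

Each emitted window overlaps the previous one by `window_len - slide_len` batches, which is why the same key can appear in consecutive windows.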
Got it from a friend: println(model.weights) and println(model.intercept).
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Model-characterization-tp17985p18106.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.