Hello everybody,
I'm new to Spark Streaming and have been playing around with WordCount and a
PageRank algorithm in a cluster environment.
Am I right that in a cluster each executor processes its part of the data
stream separately, and that the result of each executor is independent of the
other executors?
In the "non-streaming" spark applications each action-operation merges the
data from the executors and compute one result. or is this wrong?
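To check my understanding of the batch case, this is roughly what I have in
mind (a minimal word-count sketch; sc is an existing SparkContext and the
input path is just a placeholder):

    val counts = sc.textFile("hdfs://namenode/path/to/input")  // placeholder path
      .flatMap(_.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)               // partial results from all executors get merged
    counts.collect().foreach(println)   // one combined result on the driver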
Is it possible in a streaming context to merge several streams, e.g. with a
reduce, and compute one result? I was imagining something like the sketch
below.
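(A rough sketch only; socketTextStream is just a stand-in source, and the
hosts, ports and batch interval are placeholders. I'm not sure this is the
right approach.)

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf().setAppName("StreamingWordCount")
    val ssc  = new StreamingContext(conf, Seconds(10))

    // two input streams, merged with union
    val streamA = ssc.socketTextStream("host1", 9999)
    val streamB = ssc.socketTextStream("host2", 9999)

    // reduceByKey combines the partial results from all executors
    // into one result per batch interval
    val counts = streamA.union(streamB)
      .flatMap(_.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.print()
    ssc.start()
    ssc.awaitTermination()

Would something like this give me one merged result per batch interval?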

greetz


