Here's the official Spark documentation about batch size/interval:
http://spark.apache.org/docs/latest/streaming-programming-guide.html#setting-the-right-batch-size

Spark is batch-oriented processing. As you mentioned, a stream is a
continuous flow of data, and core Spark cannot handle it directly.

Spark Streaming bridges the gap between the continuous flow and
batch-oriented processing. Every batch interval it turns the data
that arrived on the stream into an RDD, and Spark then processes
those RDDs as normal batch jobs.
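The micro-batch idea can be sketched in plain Python (no Spark needed; the function name and the toy stream are made up for illustration): timestamped events from a "continuous" stream are grouped into fixed-width windows, and each window plays the role of one RDD.

```python
from itertools import groupby

def micro_batches(events, batch_interval):
    """Group time-ordered (timestamp, value) events into fixed-width
    batches, one per batch interval -- analogous to Spark Streaming
    cutting the stream into one RDD per interval.

    Note: groupby only merges consecutive events, which is fine here
    because a stream arrives in time order; intervals with no events
    simply produce no batch.
    """
    return [
        [value for _, value in group]
        for _, group in groupby(
            events, key=lambda ev: int(ev[0] // batch_interval)
        )
    ]

# A toy stream: (arrival time in seconds, payload).
stream = [(0.1, "a"), (0.9, "b"), (1.2, "c"), (2.5, "d")]

# With a 1-second batch interval this yields three batches.
print(micro_batches(stream, 1.0))  # [['a', 'b'], ['c'], ['d']]
```

In real Spark Streaming the batch interval is what you pass when creating the StreamingContext, and the tuning advice in the guide linked above is about picking it so each batch finishes processing before the next one arrives.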
--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Streaming-batchDuration-for-streaming-tp14469p14487.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
