I'm not a Spark expert, but:

What Spark does is run receivers in the executors.
Each receiver is a long-running task that occupies 1 core in its
executor; if an executor has more cores than receivers, it can also
process (at least some of) the data it is receiving.
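As a rough sketch of that core math (a hypothetical helper, not a Spark API):

```python
def spare_processing_cores(executor_cores: int, receivers_on_executor: int) -> int:
    """Each receiver permanently occupies 1 core as a long-running task.
    Whatever is left over is available for processing the received data."""
    if receivers_on_executor > executor_cores:
        raise ValueError("more receivers than cores: some receivers cannot be scheduled")
    return executor_cores - receivers_on_executor

# An executor with 4 cores and 1 receiver keeps 3 cores for processing.
print(spare_processing_cores(4, 1))  # → 3

# With as many receivers as cores, nothing is left over to process the data.
print(spare_processing_cores(2, 2))  # → 0
```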

So, "enough cores" basically means giving each executor more cores than
it runs receivers (at least 1 more), so that executors can process the
data as well as receive it. By letting the same executor process the
data it received, you also avoid (again, at least to some extent) moving
the data around the cluster, which is generally a good thing.
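For example (a hypothetical `spark-submit` invocation; the class and jar names are made up), if each executor runs one receiver, asking for 4 cores per executor leaves 3 cores free for processing:

```shell
# 2 executors x 4 cores; with 1 receiver per executor,
# each executor still has 3 cores to process the stream.
spark-submit \
  --class com.example.StreamingApp \
  --num-executors 2 \
  --executor-cores 4 \
  streaming-app.jar
```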




--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Streaming-minimum-cores-for-a-Receiver-tp25307p25316.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
