Hi,

Is there a way to get the number of slaves/workers at runtime?

I searched online but didn't find anything :/ The application I'm working on
will run on different clusters corresponding to different deployment stages
(beta -> prod). It would be great to get the number of slaves currently in
use, so that I can set the level of parallelism and the number of RDD
partitions based on that number.
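
For illustration, here's roughly what I'm hoping to do (just a sketch;
getExecutorMemoryStatus is my best guess at a starting point, assuming it
reports one entry per block manager, driver included -- I'd love to know if
there's a more direct API):

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("worker-count"))

    // Sketch only: getExecutorMemoryStatus returns one entry per block
    // manager, including the driver, so size - 1 would approximate the
    // number of workers currently registered.
    val numWorkers = sc.getExecutorMemoryStatus.size - 1

    // Size parallelism from the worker count, e.g. a few partitions each.
    val rdd = sc.parallelize(1 to 1000, numWorkers * 4)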

Thanks!
Nicolas


