Hey, 

Can anyone explain how a job actually runs in Spark?

Specifically: how many map and reduce tasks are launched, which temporary
files are created and what data each one contains, and how to set the
number of reduce tasks the way we do in Hadoop.
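
To make the last point concrete, here is roughly what I mean (the class name
and file paths below are just placeholders I made up). In Hadoop MapReduce I
would call `job.setNumReduceTasks(10)` explicitly; in Spark I assume the
closest equivalent is the numPartitions argument on shuffle operations such
as reduceByKey, plus the spark.default.parallelism setting, e.g.:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical word-count sketch to illustrate the question.
object WordCountSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("WordCountSketch")
      // Used when an operation does not get an explicit partition count.
      .set("spark.default.parallelism", "8")
    val sc = new SparkContext(conf)

    val counts = sc.textFile("hdfs:///input/words.txt") // placeholder path
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      // Is this the Spark analogue of job.setNumReduceTasks(10)?
      // i.e. 10 reduce-side partitions for the shuffle:
      .reduceByKey(_ + _, 10)

    counts.saveAsTextFile("hdfs:///output/counts") // placeholder path
    sc.stop()
  }
}
```

Is that the right knob, and what happens on disk (the tmp files) during
the shuffle between those stages?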

This would be a big help. Thank you.





--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Query-execution-in-spark-tp1390.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.