In classic MapReduce/Hadoop, you can optionally define setup() and cleanup()
methods on a Mapper or Reducer.
They are called once per task, so if you have 20 mapper tasks running,
setup() and cleanup() are each invoked once for every one of them.
What is the equivalent of these in Spark?

Thanks and best regards,
Mahmoud
