Hi,

I have a cluster with several nodes, and each node has several cores. I'd 
like to run a multi-core algorithm inside every map task, so I need to ensure 
that only one map task runs per cluster node at a time. Is there a way to 
guarantee this? It seems this should be possible via spark.task.cpus, as 
described at https://spark.apache.org/docs/latest/configuration.html, but it's 
not clear to me whether that value means the total number of CPUs per cluster 
or the number of CPUs per cluster node.
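
For reference, here is roughly what I have in mind (just a sketch; the app 
name and the value 8 are placeholders, assuming 8 cores per node):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("MultiCorePerNodeMap")  // hypothetical app name
      .set("spark.executor.cores", "8")   // assuming 8 cores per node
      .set("spark.task.cpus", "8")        // reserve 8 CPUs per task(?)
    val sc = new SparkContext(conf)

My reasoning is that if spark.task.cpus is a per-task setting, then making it 
equal to spark.executor.cores should force each executor to run only one task 
at a time; but if it is a cluster-wide total, this would not work, which is 
why I'm asking.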

Thank you in advance for any help and suggestions. 
 
