Hi All,
the Spark 1.0.0 documentation states that there is an option "--cores"
that one can use to set the number of cores that spark-shell uses on
the cluster:

"You can also pass an option --cores <numCores> to control the number of
cores that spark-shell uses on the cluster."

This option does not seem to work for me.
When I run the following command:
./spark-shell --cores 12
I keep getting the error:
bad option: '--cores'

Is there any other way of controlling the total number of cores used by
spark-shell?
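
The closest alternative I can think of is the spark.cores.max property.
A rough sketch of what I mean, assuming a standalone cluster (the master
URL and file paths here are placeholders, not my real setup):

# cap the total number of cores this application may take on the cluster
echo "spark.cores.max 12" >> conf/spark-defaults.conf
./spark-shell --master spark://host:7077

but a command-line flag would be more convenient if one exists.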

Thanks,
Marek
