I'm building a model on a standalone cluster with just a single worker,
limited to three cores and 4 GB of RAM.  The node starts up and prints the
message:

Starting Spark worker 192.168.1.185:60203 with 3 cores, 4.0 GB RAM

During model training (SVMWithSGD), CPU usage on the worker is very low.  It
barely gets above 25% of a single core.  I've tried adjusting the number of
cores, but no matter what, CPU usage seems to average around 25% (of just
one core).

The machine is running OS X with a quad-core i7.

Is this expected behaviour?  It feels like the CPU is massively
under-utilised.
