The client tries to use all GPUs, even if this overcommits the CPUs.
The assumption is that this maximizes throughput.
Is there evidence that it doesn't?
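
For reference, the setting described below goes in the client's cc_config.xml;
a minimal sketch (only the option Bernd mentions, everything else omitted):

    <cc_config>
      <options>
        <!-- make the client act as if the host has only one usable CPU core -->
        <ncpus>1</ncpus>
      </options>
    </cc_config>
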
-- David

On 12-Apr-2011 4:51 AM, Bernd Machenschalk wrote:
> Hi!
>
> We're experimenting with running BOINC on a cluster of GPU nodes. Our
> application takes a full core per NVidia GPU (avg_ncpus = 1.0). The BOINC
> Client is told to use only one CPU core (for now), i.e. <ncpus>1</ncpus> in
> cc_config.xml.
>
> However, the Client starts as many tasks as there are GPUs on that node. When
> scheduling GPU tasks, does the Client ignore the number of available cores,
> expecting that there will always be more cores than GPUs? If so, I'd consider
> this a bug.
>
> Best, Bernd
>
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.
