The current method of choosing which app version is "best" for a given task on 
a host is based on the highest projected FLOPS. It seems that quickest 
projected turnaround would serve the project better and make more sense to 
users. If the work request is for a single resource the two methods pick the 
same version, of course, but for two or more resources they may differ.

Suppose a host asks for 120000 seconds of CPU work and 20000 seconds of NVIDIA 
GPU work. If the host has a quad-core CPU and a single GPU, that's roughly 
30000 seconds of requested work per CPU core. In other words, the host is 
saying it would likely start a new CPU task 10000 seconds before a new GPU 
task. That's only part of the turnaround time, but perhaps enough to base the 
choice on. That is, the first task would go to the CPU, and its estimated 
runtime would be subtracted from the CPU time request as always. By the next 
task the balance might have shifted, so the GPU would get that one.
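To make the idea concrete, here's a minimal sketch (illustrative only, not 
actual BOINC scheduler code; all names are made up) of picking the resource 
with the largest per-instance shortfall and debiting each assigned task's 
estimated runtime:

```python
def pick_resource(requests, ninstances):
    """Pick the resource whose remaining per-instance request is largest,
    i.e. the one expected to start a new task soonest."""
    return max(requests, key=lambda r: requests[r] / ninstances[r])

def assign(requests, ninstances, est_time, ntasks):
    """Greedily assign ntasks tasks. After each assignment, subtract the
    task's estimated runtime from that resource's request, so the balance
    can shift to the other resource for the next task."""
    remaining = dict(requests)
    plan = []
    for _ in range(ntasks):
        r = pick_resource(remaining, ninstances)
        plan.append(r)
        remaining[r] -= est_time[r]
    return plan

# The example from above: 120000 s of CPU work on 4 cores (30000 s/core)
# vs 20000 s of GPU work on 1 GPU; the first task goes to the CPU.
plan = assign(requests={'cpu': 120000, 'nvidia': 20000},
              ninstances={'cpu': 4, 'nvidia': 1},
              est_time={'cpu': 50000, 'nvidia': 10000},  # assumed task sizes
              ntasks=3)
print(plan)
```

With the assumed task runtimes, the first task lands on the CPU (30000 > 
20000 per instance); debiting its 50000 s drops the CPU side to 17500 s per 
core, so the second task goes to the GPU.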

This method would tend to keep all of a host's resources useful to the 
project, particularly when the number of tasks available is limited. With the 
current algorithm I've seen many complaints on the SETI@home forums about 
getting GPU work when the CPUs are about to run dry (or already have), and 
some about the opposite condition. The root cause is that when the requests 
for both resource types exceed what's currently available, all assigned tasks 
go to one resource, and the same resource keeps being chosen on subsequent 
requests until the requested amount of time for that resource is satisfied.
-- 
                                                          Joe
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev