David,
For extreme cases of short work supply, I agree the cold hard analysis says
the work should go to the fastest app version. That is, if all resources are
destined to be idle part of the day anyhow, projected turnaround is minimized
by sending the work to the GPU. Such cold hard logic may not be best in terms
of its psychological effect on participants, but that's a separate subject.
For less extreme cases, consider this scenario: A host with one day work buffer
settings has 12 hours of GPU tasks on hand and only 1 hour of CPU tasks. What
good does it do to increase the number of GPU tasks on disk when a CPU is going
to go idle in an hour? Granted, the host will make other requests and perhaps
eventually get up to 24 hours of GPU work, after which the idled CPUs can become
productive again. In my view it makes much more sense to ensure that some of
the work is sent to CPU so neither CPU nor GPU go idle. That is, my assumption
is that the availability of tasks varies somewhat randomly, but over moderate
amounts of time is adequate to keep all resources productive.
A simple method of handling both cases with my suggested approach would be to
require that the CPUs be projected to go idle an hour or more before the GPU
in order to send CPU work first. Then, if the host is really starved for work,
the GPU might still get all of the few available tasks.
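To make the suggested rule concrete, here is a minimal sketch (names and the exact margin handling are my own illustration, not actual BOINC scheduler code): prefer the CPU only when it is projected to run dry at least an hour before the GPU; otherwise the task goes to the GPU, so that under extreme scarcity the few available tasks all go to the fastest resource.

```python
MARGIN = 3600  # one hour, in seconds (hypothetical threshold from the proposal)

def choose_resource(cpu_seconds_on_hand, gpu_seconds_on_hand):
    """Return 'cpu' or 'gpu' for the next task to assign.

    cpu_seconds_on_hand: projected seconds of queued work per CPU core
    gpu_seconds_on_hand: projected seconds of queued work per GPU
    """
    # Send CPU work only if the CPUs would go idle well before the GPU;
    # otherwise the GPU gets the task.
    if cpu_seconds_on_hand + MARGIN <= gpu_seconds_on_hand:
        return 'cpu'
    return 'gpu'
```

For the scenario above, a host with 1 hour of CPU work and 12 hours of GPU work on hand would get CPU tasks first; a host with nearly empty queues on both resources would send everything to the GPU.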
--
Joe
On Thu, 22 Mar 2012 00:29:24 -0400, David Anderson <[email protected]>
wrote:
> Josef:
> I don't understand the reasoning here.
> If there aren't enough jobs to satisfy both the CPU and GPU requests,
> isn't it better to send jobs only for GPU?
>
> Is there a scenario where the current policy results in
> less throughput (i.e. less credit) than some other policy?
>
> -- David
>
> On 21-Mar-2012 9:09 PM, Josef W. Segur wrote:
>> The current method of choosing which app version is "best" for a given
>> task on a host is based on the highest projected flops. It seems that
>> quickest projected turnaround would be better for the project and make
>> more sense to users. If the work request is for a single resource the
>> choice does not differ, of course, but for two or more resources it may.
>>
>> Suppose a host asks for 120000 seconds of CPU work and 20000 seconds of
>> NVIDIA GPU work. If the host has a quad-core CPU and a single GPU,
>> that's roughly 30000 seconds for each CPU. IOW, the host is saying that
>> it would likely start a CPU task 10000 seconds before a GPU task.
>> That's only part of the turnaround time, but perhaps enough to base the
>> choice on. That is, the first task would go to CPU and its estimated
>> time be subtracted from the CPU time request as always. Then for the
>> next task the balance might have shifted so the GPU gets that one.
>>
>> Particularly when there's some limitation on the number of tasks
>> available, this method would tend to keep all the host's resources
>> useful to the project. With the current algorithm I've seen many
>> complaints on SETI@home forums about getting GPU work when the CPUs are
>> about to run dry (or actually have), and some for the opposite
>> condition. Basically, when the requests for both types are more than
>> what's currently available, all assigned tasks go to one resource, and
>> the same resource is chosen on subsequent requests until the requested
>> amount of time for that resource is reached.
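The iterative choice described in the quoted message could be sketched roughly as follows (all names hypothetical, not real scheduler code): each task goes to whichever resource currently has the larger outstanding request per instance, i.e. whichever would start the task sooner, and the task's estimated time is then subtracted from that resource's request, so the balance can shift for the next task.

```python
def assign_tasks(cpu_request, n_cpus, gpu_request, n_gpus, task_times):
    """Return a list of ('cpu'|'gpu', estimated_time) assignments.

    cpu_request / gpu_request: total requested seconds per resource type
    n_cpus / n_gpus: number of instances of each resource
    task_times: estimated runtime of each available task, in seconds
    """
    assignments = []
    for t in task_times:
        # Per-instance outstanding request approximates how soon each
        # resource would start this task: the larger it is, the sooner.
        if cpu_request / n_cpus >= gpu_request / n_gpus:
            assignments.append(('cpu', t))
            cpu_request -= t
        else:
            assignments.append(('gpu', t))
            gpu_request -= t
    return assignments
```

Using the figures from the message (120000 s CPU over four cores, i.e. 30000 s per core, versus 20000 s for one GPU), the first task would go to CPU; once enough estimated time is subtracted from the CPU request, subsequent tasks shift to the GPU.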
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.