On Apr 25, 2009, at 6:49 PM, Rom Walton wrote:

> IIRC, that was put into place as part of a way to deal with high
> priority vs. lower priority items.
>
> I don't remember if it was an internal hospital grid or some form of
> internal research grid, but the example that comes to mind is: during
> the normal course of the grid's operation they process 10-20 minute
> tasks. Every once in a while they have to plow through some high
> priority work, so they shorten the deadlines. For some reason
> processing MRIs comes to mind.
>
> I think the target goal was to have a complete turn-around for the
> high-priority tasks within 5 hours or something like that.

Even so, unless you obtained more than 4 hours of work, a reschedule  
once an hour is still sufficient.  Especially when statistics tell me  
that the actual delay will average half the reschedule interval.
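To illustrate that "half the interval" point, here is a quick simulation (function name and numbers are mine, purely illustrative): if high-priority work becomes available at a uniformly random moment within an hourly reschedule cycle, its mean wait for the next scheduling pass converges to about half an hour.

```python
import random

def average_delay(reschedule_interval_min=60.0, trials=100_000):
    """Simulate events arriving uniformly at random within a reschedule
    cycle; each event waits until the next scheduled pass runs."""
    total_wait = 0.0
    for _ in range(trials):
        arrival = random.uniform(0.0, reschedule_interval_min)
        total_wait += reschedule_interval_min - arrival  # wait until next pass
    return total_wait / trials

# With an hourly pass, the mean wait converges toward ~30 minutes.
```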

Effectively we have the "false positive" detection problem.

And, if my i7 were processing 10-20 minute tasks, or even a 4-core,  
you would have a slot opening up on average once every 2-5 minutes ...  
so we are back to bank-teller queues and FIFO with "jump ahead"  
capability, so that the guy with the doctor's appointment and the  
bloodily dripping leg can get some cash to make his co-pay ...
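The bank-teller model is easy to sketch (a minimal illustration, not how the client actually implements it; class and names are mine): a heap keyed on (priority, arrival order) gives strict FIFO service within a priority level, while urgent items jump the line.

```python
import heapq
import itertools

class JumpAheadQueue:
    """FIFO queue where urgent items can jump ahead of normal ones.
    Ties within a priority level are still served first-come-first-served."""
    NORMAL, URGENT = 1, 0  # lower number = served sooner

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # preserves FIFO order within a level

    def enqueue(self, item, priority=NORMAL):
        heapq.heappush(self._heap, (priority, next(self._seq), item))

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = JumpAheadQueue()
q.enqueue("routine task A")
q.enqueue("routine task B")
q.enqueue("dripping-leg MRI", priority=JumpAheadQueue.URGENT)
# Dequeue order: the urgent task first, then A, then B (FIFO).
```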

If there is a genuine need for priority interruptus, there is likely a  
better way to get there from here, and I seriously think we should  
consider changing this rule.

Now I am really back to the position that the only reason to re-run  
the bulk of that process / procedure / function is when a task has  
ended and we have a free resource, or when the TSI (task switch  
interval) is hit.  And TSI should not be used in calculating deadline  
constraints.  Instead we should be tracking the average time interval  
between task completions / TSI suspensions (for me, TSI suspensions  
will pull the number out; for some, in) ... plus a safety margin (1.5  
times this average?  Though I think that is high, I could live with it  
I suppose) ... and use that in place of TSI.
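Something like this (a sketch of the tracking idea only; the class, the smoothing weight, and the seed value are all my assumptions, not existing client code): keep a running average of the interval between completions and multiply by the 1.5x safety margin to get the horizon that would stand in for TSI.

```python
class CompletionIntervalTracker:
    """Exponentially weighted average of the time between task
    completions, plus a safety margin, as a stand-in for TSI when
    judging deadline pressure.  All names/parameters are illustrative."""

    def __init__(self, alpha=0.1, safety_factor=1.5, initial_avg=3600.0):
        self.alpha = alpha                  # smoothing weight for new samples
        self.safety_factor = safety_factor  # the 1.5x margin discussed above
        self.avg = initial_avg              # seed with the old TSI (seconds)
        self.last_completion = None

    def record_completion(self, now):
        if self.last_completion is not None:
            interval = now - self.last_completion
            self.avg = (1 - self.alpha) * self.avg + self.alpha * interval
        self.last_completion = now

    def horizon(self):
        """Interval to use in place of TSI for deadline checks."""
        return self.safety_factor * self.avg
```

On a host completing a task every 10 minutes, the horizon settles toward 1.5 x 600 = 900 seconds rather than the fixed hour-scale TSI.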

I think these changes *MAY* allow us to cure this problem of  
suspending too many tasks too often.

The only remaining issue will be the cases where you get a batch of  
tasks with nearly identical deadlines and the horizon is shorter than  
normal ... my bug-a-boo is usually IBERCIVIS, where I get a batch of  
6-10 tasks and the client decides that all 6-10 have to be run at the  
same time, violating the "interesting" work-mix rules ... one of the  
reasons I asked for a per-project "pipeline" size parameter, so that  
I could say no more than 1 task per project per CPU (a "use at most"  
rule?).  I am not convinced that this is a good rule as an absolute ...  
but I cannot think of a better suggestion.
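The pipeline cap could be as simple as a filter over the earliest-deadline-first pick (my sketch; the function, the task tuples, and the project names are hypothetical, not the client's actual data structures): take tasks in deadline order, but skip any project that already has its quota running.

```python
from collections import Counter

def pick_runnable(tasks, n_cpus, max_per_project=1):
    """Choose up to n_cpus tasks, earliest deadline first, running no
    more than max_per_project tasks from any one project at a time.
    `tasks` is a list of (deadline, project) pairs; illustrative only."""
    running = []
    per_project = Counter()
    for deadline, project in sorted(tasks):
        if len(running) == n_cpus:
            break
        if per_project[project] < max_per_project:
            running.append((deadline, project))
            per_project[project] += 1
    return running
```

With a batch of 6 same-deadline IBERCIVIS tasks on a 4-core box, only one runs at a time and the other slots stay open for a mixed work load.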

There is a Trac ticket about this related to HT (back in the i7 design  
and the new Xeons) where the suggestion is that CPU 0 and CPU 1 should  
be assigned FP-heavy and INT-heavy tasks to get the best out of the  
system.  Not sure THAT is practical either, but limiting, to the  
extent possible, the number of tasks run from any one specific project  
gets us close ...
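The ticket's pairing idea, reduced to its core (a toy sketch under my own assumptions; the client has no such classification today, and "fp"/"int" labels are hypothetical): match one FP-heavy task with one INT-heavy task per hyperthread sibling pair, leaving same-kind tasks over when the mix is uneven.

```python
def pair_for_hyperthreading(tasks):
    """Assign tasks to hyperthread sibling pairs so each physical core
    gets one FP-heavy and one INT-heavy task where possible.
    `tasks` is a list of (name, kind) with kind in {"fp", "int"}."""
    fp = [t for t in tasks if t[1] == "fp"]
    ints = [t for t in tasks if t[1] == "int"]
    pairs = []
    while fp and ints:
        pairs.append((fp.pop()[0], ints.pop()[0]))  # one mixed pair per core
    leftovers = [t[0] for t in fp + ints]           # same-kind tasks remain
    return pairs, leftovers
```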

> I'm not sure if this scenario is accurate anymore.

So, why are we subjecting ourselves to constraints that don't arise in  
the real world of today?
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.