I like the idea of a linear flag that only uses percent complete and
elapsed time.  In addition, if there were a way to set the flops vs. iops
weights for the initial estimate, that would also be helpful. I have a
feeling that the flops-vs-iops balance is why some machines have accurate
estimates while others are off by as much as a factor of 10.
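
A minimal sketch of what such a "linear" estimator might look like in the
client (the function name and the fallback rule are my assumptions, not
actual BOINC client code):

```cpp
#include <cassert>

// Hypothetical sketch of a "linear" duration estimate: trust only
// percent complete and elapsed time, falling back to the static
// estimate until the task reports any progress.
double linear_est_dur(double fraction_done, double elapsed_time,
                      double static_est) {
    if (fraction_done >= 1) return elapsed_time;
    if (fraction_done <= 0) return static_est;   // no progress yet
    return elapsed_time / fraction_done;         // pure extrapolation
}
```

With Jon's numbers (11% done after 1.5 hours) this extrapolates to about
13.6 hours total, in line with his expectation of roughly 14 hours rather
than 72.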

Jon

On Sun, Feb 9, 2014 at 12:01 PM, Michael Goetz <[email protected]> wrote:

> I don't know what percentage of apps behave this way, but certainly many
> apps behave in a completely linear fashion.  If they're reporting x%
> complete, then the time they've used will be exactly x% of the total run
> time.  Everything at PrimeGrid behaves like this: after just a few minutes
> of running we can generally predict the total run time with an accuracy of
> about 99%.  To be sure, this doesn't apply to all projects, but it
> certainly applies to some and is possibly true for most.
>
> If the BOINC client ignored its own static time calculation and simply
> used the percentage complete and the cpu_time (or elapsed time in the case
> of GPU apps), wouldn't it almost always show an accurate time estimate?
>
> If an app (or perhaps app_version) could be designated as "linear", and the
> client used this flag to weight the dynamic calculation at 100% and ignore
> the static calculation, I think the estimated time problem would vanish for
> a significant number of projects.
>
> Mike
>
>
> On Sun, Feb 9, 2014 at 8:46 AM, William <[email protected]> wrote:
>
> > As Jon pointed out, it's very embarrassingly inaccurate early in the run.
> >
> > The negative exponential weighting of the dynamic estimate is the
> > problem.
> >
> > Suggest weighting the dynamic estimate as:
> >
> >    MAX (1, (3 * the PERCENTAGE done))
> >
> > Then at 11% the weight of the dynamic estimate is 33% (vs. 1.21%).
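
A sketch of that proposed weighting (the cap at 100% is my assumption; the
original suggestion only specifies the floor):

```cpp
#include <algorithm>
#include <cassert>

// William's proposed dynamic-estimate weight, in percent:
//   MAX(1, 3 * percent_done), capped at 100 (the cap is an assumption).
double dynamic_weight_pct(double percent_done) {
    return std::min(100.0, std::max(1.0, 3.0 * percent_done));
}
```

At 11% done this gives a 33% weight, versus the 1.21% (0.11 squared) that
the current quadratic weighting produces.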
> >
> >
> > ~~~~~
> > "Rightful liberty is unobstructed action according to our will within
> > limits drawn around us by the equal rights of others. I do not add
> > 'within the limits of the law' because law is often but the tyrant's
> > will, and always so when it violates the rights of the individual."
> > - Thomas Jefferson
> >
> >
> >
> > On Saturday, February 8, 2014 8:44 PM, David Anderson
> > <[email protected]> wrote:
> >
> > >The estimate is a weighted combination of
> > >static (based on wu.rsc_fpops_est and avp->flops)
> > >and dynamic (based on fraction done and elapsed time)
> > >estimates; see below.
> > >The weight of the dynamic estimate is the square of the fraction done;
> > >e.g. when 50% done, the weight is 0.25.
> > >
> > >So at 11% done the estimate is based almost entirely on the static
> > >estimate.
> > >
> > >-- David
> > >
> > >double ACTIVE_TASK::est_dur() {
> > >     if (fraction_done >= 1) return elapsed_time;
> > >     double wu_est = result->estimated_runtime();
> > >     if (fraction_done <= 0) return wu_est;
> > >     if (wu_est < elapsed_time) wu_est = elapsed_time;
> > >     double frac_est = fraction_done_elapsed_time / fraction_done;
> > >     double fd_weight = fraction_done * fraction_done;
> > >     double wu_weight = 1 - fd_weight;
> > >     double x = fd_weight*frac_est + wu_weight*wu_est;
> > >     return x;
> > >}
> > >
> > >double RESULT::estimated_runtime_uncorrected() {
> > >     return wup->rsc_fpops_est/avp->flops;
> > >}
> > >
> > >// estimate how long a result will take on this host
> > >//
> > >double RESULT::estimated_runtime() {
> > >     double x = estimated_runtime_uncorrected();
> > >     if (!project->dont_use_dcf) {
> > >         x *= project->duration_correction_factor;
> > >     }
> > >     return x;
> > >}
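
Plugging Jon's numbers into the est_dur() logic shows why the estimate is
so far off. The static estimate of 74.2 hours used below is hypothetical,
chosen only to reproduce the reported ~72-hour figure; the actual wu_est on
his host is unknown:

```cpp
#include <cassert>
#include <cmath>

// Standalone rework of est_dur() to show the weighting at 11% done.
// Inputs: fraction done, elapsed time (hours), static estimate (hours).
double est_dur_sketch(double fraction_done, double elapsed, double wu_est) {
    if (wu_est < elapsed) wu_est = elapsed;
    double frac_est = elapsed / fraction_done;        // 1.5/0.11 ~ 13.6 h
    double fd_weight = fraction_done * fraction_done; // 0.11^2 = 0.0121
    return fd_weight * frac_est + (1 - fd_weight) * wu_est;
}
```

With fraction_done = 0.11, elapsed = 1.5 h, and a hypothetical wu_est of
74.2 h, the dynamic estimate (13.6 h) gets only a 1.21% weight, so the
combined estimate comes out near 73.5 h total, i.e. roughly 72 h remaining.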
> > >
> > >
> > >On 08-Feb-2014 11:54 AM, Jon Sonntag wrote:
> > >> Why would 11% complete in 1.5 hours have an estimated 72 hours
> > >> remaining when it should be closer to 14 hours remaining?  Does BOINC
> > >> need a math tutor?  ;-)
> > >>
> > >> I find it interesting that the estimates on a Q6600 are correct but
> > >> on both of my i7 hosts they are way too high.
> > >>
> > >> All hosts have been running the app for several weeks, so any
> > >> learning curve by the smart estimate algorithm should have adjusted
> > >> the numbers already, right?  How long should it take BOINC to get the
> > >> estimates correct?  I would think less than an hour when percent
> > >> complete is totally linear.  Or is the problem that the benchmarks do
> > >> not take hyper-threading into account, which skews the estimates?
> > >>
> > >> Jon Sonntag
> > >> _______________________________________________
> > >> boinc_dev mailing list
> > >> [email protected]
> > >> http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
> > >> To unsubscribe, visit the above URL and
> > >> (near bottom of page) enter your email address.