> programming the task will rotate. Since we know what the answer should
> be we are validating the ENTIRE PROCESS from one end to the other ...
OK, you have once again described how it should work.
But I still can't see why it is needed. This is the biggest difficulty
with your proposal.
I, and probably others, just don't see the reason to do this. The question
is not how it could be implemented, or whether it could be implemented at
all, but why it should be implemented in the first place. That is the
problem.
What new does such end-to-end validation give us? (Please take into
account that your validation can be broken on the very NEXT task; this
has been pointed out a few times already.) "Entities should not be
multiplied unnecessarily," as Occam's razor tells us. What part of the
current BOINC validation structure could it replace? None, IMO. And what
part of the BOINC validation system could it improve if ADDED to the
current system? My answer is the same: none. Could you specify exactly
which part it could improve, taking into account the possibility of
random errors? (I still see no accounting for this aspect in your posts.)



> is needed to establish the operational speed of that machine in CS ...
> note that the point is to establish this with more reliability with real
> work on real machines over real execution times so that the instability
> of the benchmarks is eliminated as an issue.
Yes, running many "almost real" tasks would improve credit estimation.
But:
1) the overhead is too high for it to be worthwhile;
2) the same could be achieved by using REAL tasks and a set of reference
PCs (a sketch of that scheme follows below).
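
To make the comparison concrete, here is a minimal sketch of how a
reference-PC scheme could price real tasks once, project-side, instead of
shipping extra calibration work to every volunteer host. All names and
numbers here are my illustrative assumptions, not BOINC code:

# Hypothetical sketch: price a real workunit type on project-owned
# reference PCs; every volunteer host then earns that fixed credit.
# REFERENCE_RATE and all timings are assumed, illustrative values.

REFERENCE_RATE = 100.0  # credits granted per hour of reference-PC time

def credit_per_workunit(reference_runtimes_hours):
    """Average runtime of one workunit type across the reference PCs
    fixes its credit value once, for all volunteer hosts."""
    mean_hours = sum(reference_runtimes_hours) / len(reference_runtimes_hours)
    return mean_hours * REFERENCE_RATE

# One workunit type, timed on three reference machines:
print(credit_per_workunit([2.1, 1.9, 2.0]))  # -> 200.0 credits

The overhead here is a handful of project machines, not a slice of every
volunteer's computing time.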

> My personal feeling is that using other mechanisms we can fill in the gap
> and the current benchmark can be eliminated ...
Yes, your kind of benchmark could replace the current one, but it would
bring no valuable benefit, just bigger overhead.



> The point is that instead of requiring the counting of FLOPS or Loops or
> anything else we establish generalized earnings for a specific computer
> using a collective of work. The more different work loads we use the
> more "accurate" our estimate.
Again, correct in its basic part: indeed, if the many separate
per-project estimations were replaced by a single, better-established
estimation (sketched below), cross-project credit parity would improve.
But again, reference PCs look preferable to reference work for this task.
They would achieve the same result with less overhead involved.
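
For reference, as I read it, the quoted "collective of work" estimate
amounts to averaging per-workload speed ratios, roughly like this sketch
(names and numbers are assumed, not taken from the proposal):

def host_speed_estimate(reference_runtimes, host_runtimes):
    """Mean speed ratio over several different real workloads; the more
    workload types, the more stable the estimate (the quoted claim)."""
    ratios = [r / h for r, h in zip(reference_runtimes, host_runtimes)]
    return sum(ratios) / len(ratios)

# Three workload types timed on a reference machine and on the host:
print(host_speed_estimate([10.0, 30.0, 8.0], [5.0, 12.0, 4.0]))  # ~2.17

Whether the baseline timings come from reference PCs or from reference
work sent to volunteers, the averaging is the same; the difference is
purely who pays the runtime cost.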


> We make the assumption that the validator will catch errors... yet we
> know that the validator is a program written by people. The point is
> that if I make a SaH signal and the program returns 15 signals there is a
> problem somewhere ... yet if that bad answer is paired up with another
> bad answer that is the same the validators will accept both answers.
> And more and more projects are going to adaptive replication and
> validating on one task... so the idea that the redundant computing is
> going to catch errors is slowly being eroded ...
1) I'm against adaptive replication, and I consider the adaptive
replication experiment on SETI beta a total mess.
Adaptive replication really does lower our confidence in result validity.
It suffers from the same flaw as your calibration-tasks idea: it cannot
account for random errors, or for conditions that change more or less
quickly.
The CPU/GPU fan in an absolutely trusted PC can collect too much dust...
and we will have that same host returning invalid results. How soon would
we catch this with calibration tasks or with adaptive replication? Not
very soon. And the faster the host is, the more invalid work it will
produce before being caught (a rough estimate follows below).
If, and ONLY if, a project can accept lower result correctness in
exchange for higher performance (note that this is a trade-off: we pay
precision for speed), adaptive replication could be used.
And surely, if a project resorts to such a measure, it needs all the
computing power it can get and will not waste a fraction of it on
calibration tasks.
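
A back-of-the-envelope sketch of that last point, with all figures
assumed: if checks (calibration tasks or spot replication) arrive at a
fixed wall-clock interval, the invalid output before detection scales
linearly with host speed:

def invalid_tasks_before_catch(tasks_per_hour, check_interval_hours):
    """A host that starts failing right after a check runs unnoticed
    for the whole interval until the next check."""
    return tasks_per_hour * check_interval_hours

print(invalid_tasks_before_catch(2, 24))   # slow CPU host: 48 bad results
print(invalid_tasks_before_catch(40, 24))  # fast GPU host: 960 bad results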

2) I get your point. Please try to get mine.
If I made such a signal and found 15 signals instead of one in the result
file, I would immediately post about it or e-mail Eric. He could then
reproduce the issue and take measures, including bug fixes in the
validator. There is no need to involve all the participants' PCs in that.
I don't claim tests and calibrations are unneeded; of course they are
needed! But they belong on ANOTHER LEVEL of the hierarchy (see the sketch
below).
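
By "another level" I mean ordinary developer-side, known-answer testing of
the validator itself, something like this sketch (the run_validator
stand-in, the result format, and the file name are hypothetical, not
SETI@home's actual code):

def run_validator(result_file):
    """Stand-in for the project's validator; returns the signal count."""
    with open(result_file) as f:
        return sum(1 for line in f if line.startswith("signal"))

def test_single_injected_signal():
    # The workunit was built around exactly one synthetic signal.
    found = run_validator("known_answer_result.txt")
    assert found == 1, "validator reported %d signals, expected 1" % found

One such test run by the developers catches the 15-signals bug without
touching a single participant's PC.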

> THIS IS NOT THE COMPLETE PROPOSAL ... there are a myriad of details ...
> but I know that if I make it longer no one will read it ... but this is
> the core ...
Well, details usually come after the basic idea is approved; no problem
with that.
