Well, OK.
Are any probability calculations available yet for accepting an invalid result
under replication of 2 versus adaptive replication?
I suspect adaptive replication will increase the chance of an invalid result
passing into the database by orders of magnitude.
That could make BOINC as a whole an untrusted system, not just a single
participant's PC.
Again, even a host with a perfectly clean history can start producing invalid
results at some point in time.
For example: adding a semi-broken new device to a host will not change the
host ID, so the old spotless history will keep being used until a periodic
check opens BOINC's eyes. How often will you run such a check? If too often,
what is the reason to bother with adaptive replication at all? If rarely,
many invalid results can pass validation before BOINC can react (which is
what I meant by calling such a mechanism "clumsy").
Lack of validation _for each result_ = accepting invalid results into the
database. Changing the conditions for adaptive validation = merely changing
the anticipated percentage of such results.
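
A rough back-of-envelope comparison of the two failure modes (every number
below is an illustrative assumption of mine, not measured BOINC data):

    # Back-of-envelope comparison; all numbers are illustrative assumptions.
    error_rate = 0.01   # fraction of results a freshly broken host gets wrong
    match_rate = 0.01   # chance two independent invalid results still agree
    solo_rate  = 0.9    # fraction of tasks a "trusted" host runs unreplicated

    # Fixed replication of 2: an invalid result enters the database only if
    # its partner replica is also invalid AND happens to match it.
    passed_fixed = error_rate * error_rate * match_rate

    # Adaptive replication: every invalid result produced while the host
    # runs solo passes unchecked until its history catches up.
    passed_adaptive = error_rate * solo_rate

    print(passed_fixed)     # ~1e-06
    print(passed_adaptive)  # ~9e-03, thousands of times more

Even with these charitable numbers, the unchecked solo path lets invalid
results through by several orders of magnitude more often.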


----- Original Message ----- 
From: "David Anderson" <da...@ssl.berkeley.edu>
To: <john.mcl...@sybase.com>
Cc: "BOINC Developers Mailing List" <boinc_dev@ssl.berkeley.edu>
Sent: Monday, November 09, 2009 7:56 PM
Subject: Re: [boinc_dev] [boinc_projects] new credit design


> That's pretty much what it does.  Have you looked at the code?
> -- David
>
> john.mcl...@sybase.com wrote:
>> Adaptive replication should track a machine's validation and error
>> history.  Machines that have high error rates (and the machine you are
>> describing has a high error rate) will have a very low chance of running
>> without validation.  On the other hand, machines that never have
>> validation errors will have a very high chance of running solo.
>>
>> The way I would do it is to store a success fraction per computer, F = 1 -
>> (errors + aborts + invalid) / total tasks.  The probability of actually
>> issuing another task after this one would be:  (R - (N + 1)) * (1 - F) * C,
>> where R is the replication level requested by the project (1-based), N is
>> the replication number of this replicant (0-based), F is the success
>> fraction for this project on this computer, and C is some constant to
>> prevent computers that have regular errors from ever running solo.  Since
>> (R - (N + 1)) is 0 for the last requested replicant, no others will be
>> issued unless there is an error or late task.  If C is 10, then only tasks
>> from hosts with better than a 90% success rate will EVER run solo in a
>> 2-replicant system.  C could be a project setting, but it should never be
>> allowed to be set to less than 1.  Arguably, 10 is about right.
>>
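
For reference, here is the decision rule above as I read it, as a small
Python sketch (names are mine; I also cap the probability at 1, which the
90% example implies but the mail does not state):

    import random

    def should_issue_next_replica(R, N, F, C=10.0):
        """Decide whether to issue another replica of a workunit.

        R -- replication level requested by the project (1-based)
        N -- replication number of the replica just created (0-based)
        F -- host success fraction: 1 - (errors + aborts + invalid) / total
        C -- safety constant (>= 1) so flaky hosts never run solo
        """
        p = (R - (N + 1)) * (1.0 - F) * C   # chance of another replica
        return random.random() < min(p, 1.0)

    # With R = 2 and C = 10, only hosts with F > 0.9 can ever run solo:
    print(should_issue_next_replica(R=2, N=0, F=0.99))  # True ~10% of runs
    print(should_issue_next_replica(R=2, N=0, F=0.80))  # always True
    print(should_issue_next_replica(R=2, N=1, F=0.99))  # always False: last

Note that this rule still grants a newly broken host its full solo
probability until F decays, which is exactly my concern above.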

_______________________________________________
boinc_dev mailing list
boinc_dev@ssl.berkeley.edu
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.
