It also seems fairly obvious that this is an "experiment" that you can 
do without actually changing any parameters or "doing a test."

David Anderson wrote:
> s...@home has already studied this.
> Most errors come from relatively few hosts.
> Most jobs are done by hosts that NEVER have errors.
> Adaptive replication identifies these hosts
> and periodically checks them in case they go bad.
> Always using replication with reliable hosts is a waste of time
> and carbon emissions.
> Other projects report the same thing.
> 
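[For concreteness, the host-trust logic described here can be sketched roughly
as follows. This is a minimal Python sketch, not the actual BOINC scheduler
code; the class, thresholds, and function names are illustrative assumptions.]

    import random

    class HostStats:
        """Per-host validation history (illustrative)."""
        def __init__(self):
            self.consecutive_valid = 0  # results validated in a row

    # Assumed policy knobs; real values would be project-configurable.
    TRUST_THRESHOLD = 10   # valid results in a row before a host is trusted
    SPOT_CHECK_RATE = 0.1  # fraction of a trusted host's jobs still replicated

    def replication_level(host):
        """How many copies of a job to issue (2 = replicated, 1 = single)."""
        if host.consecutive_valid < TRUST_THRESHOLD:
            return 2  # untrusted host: always replicate
        # Trusted host: usually one copy, but keep spot-checking
        # periodically in case the host has gone bad.
        return 2 if random.random() < SPOT_CHECK_RATE else 1

    def record_outcome(host, valid):
        """Update trust state after a result is validated."""
        host.consecutive_valid = host.consecutive_valid + 1 if valid else 0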
> Any unreplicated computing is not 100% reliable.
> (nor is replicated computing, but it's much closer).
> But all computational science applications that I know of
> are unaffected by small error rates.
> For example, in s...@home, a few false positives don't matter because
> they won't be matched by companion signals.
> A few false negatives don't matter because we cover the sky many times
> (actually, they don't matter anyway).
> Genetic algorithms tolerate errors.  And so on.
> 
> Mathematical searches want an error rate as close to zero as possible.
> They should use 2X replication.
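[Back-of-envelope, assuming independent errors: if a host returns a wrong
result with probability p, single issue accepts wrong results at rate p,
while 2X replication with a quorum of 2 only accepts a wrong result when two
independent wrong results happen to agree, which is bounded by p^2. The
numbers below are illustrative, not measured.]

    p = 1e-3                 # assumed per-result error rate
    unreplicated = p         # a wrong result is accepted directly
    # With 2 copies and a quorum of 2, both results must be wrong AND
    # must agree; that probability is at most p * p.
    replicated_bound = p * p
    print(unreplicated)      # 0.001
    print(replicated_bound)  # 1e-06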
> 
> -- David
> 
> Raistmer wrote:
>> To estimate the confidence level of results returned under adaptive 
>> replication, and the viability of the approach, I propose the following 
>> experiment:
>> Run SETI main under adaptive replication for some time, long enough to 
>> identify "trusted" hosts and drop them to a replication of 1. (It has 
>> to be SETI main: since this is a statistical approach, the small 
>> user/host base and much lower device diversity of SETI beta won't do.)
>> Keep track of all single-replicated tasks.
>> Then reissue all of those tasks (bearing in mind that their results are 
>> already in the science database) and run the standard validation between 
>> the previously returned result (which must still be kept; this 
>> experiment requires it) and the newly returned one. The percentage of 
>> initially returned results that fail validation will show 
>> experimentally whether adaptive replication is a viable idea or ... not :)
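[In code, the proposed measurement boils down to something like the Python
sketch below; reissue() and validate() stand in for the project's existing
work generator and validator, and are placeholders, not real BOINC calls.]

    def measure_failure_rate(single_replicated_tasks, reissue, validate):
        """Reissue each single-replicated task, compare the fresh result
        against the stored original, and return the fraction that fail."""
        failures = 0
        for task in single_replicated_tasks:
            original = task.stored_result   # must still be kept around
            fresh = reissue(task)           # recompute on another host
            if not validate(original, fresh):
                failures += 1
        return failures / len(single_replicated_tasks)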
>>
>> Such an experiment will also show how much performance benefit we can 
>> really get (what the percentage of "trusted" hosts will be, and so on). 
>> And since we keep the original results, we can eventually repair any 
>> damage to the master database, so the experiment is safe to run.
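[The headline number falls out directly: if a fraction t of all jobs ends up
on trusted hosts at replication 1 instead of 2, and spot checks are ignored,
total work per job drops from 2 to 2 - t, a throughput gain of 2/(2 - t).
This formula is my own back-of-envelope estimate, not from the project.]

    def throughput_gain(trusted_fraction):
        """Throughput relative to always-2X replication (spot checks ignored)."""
        return 2.0 / (2.0 - trusted_fraction)

    print(throughput_gain(0.8))  # ~1.67x if 80% of jobs go unreplicated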
>> One real complication: it requires storing received results for quite a 
>> long time, which could put a big load on the current SETI infrastructure.
>>
>> wbr
>> Raistmer
>>
_______________________________________________
boinc_dev mailing list
boinc_dev@ssl.berkeley.edu
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.
