To estimate the confidence level of results returned under adaptive replication,
and the viability of this approach, I propose the following experiment:
Run on SETI main (since we're talking about a statistical approach, the small
user/host base and much lower device diversity of SETI beta won't fit) under
adaptive replication for some time, long enough to identify "trusted" hosts and
drop to a replication of 1 for them.
But keep track of all single-replicated tasks.
Then reissue all single-replicated tasks (keeping in mind that their results
are already in the science database) and do standard validation between the
previously returned result (which must still be kept; that's required for this
experiment) and the newly returned one. The percentage of initially returned
results that fail validation will show experimentally whether adaptive
replication is a viable idea or ... not :)

Such an experiment will also show how much performance benefit we can really
get (what the percentage of "trusted" hosts will be, and so on). And since we
keep the results anyway, we can eventually avoid damage to the master
database, so it can be done.
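The size of that benefit can be put in back-of-the-envelope terms. This is a
sketch under stated assumptions: a baseline replication of 2, and the 50%
trusted-task share in the example is an assumed value for illustration only;
the proposed experiment is what would measure the real one.

```python
# Back-of-the-envelope estimate of compute saved by adaptive replication,
# assuming each task normally gets `baseline_replication` copies and tasks
# sent to trusted hosts drop to a single copy.

def compute_saving(trusted_task_share, baseline_replication=2):
    """Fraction of total result computations saved when the given share of
    tasks drops from baseline_replication copies to one copy."""
    baseline = baseline_replication
    adaptive = trusted_task_share * 1 + (1 - trusted_task_share) * baseline
    return (baseline - adaptive) / baseline

# Assumed example: if 50% of tasks go to trusted hosts, a quarter of all
# result computations are saved.
print(f"saving: {compute_saving(0.5):.0%}")
```

In other words, with a baseline replication of 2 the saving is simply half the
trusted-task share, which is why the measured percentage of trusted hosts is
the key number.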
One real complication: it requires storing received results for a fairly long
time, which could put a big load on the current SETI infrastructure.

wbr
Raistmer

_______________________________________________
boinc_dev mailing list
boinc_dev@ssl.berkeley.edu
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.
