> In a reply to Raistmer I made a quick summary which you should also
> read, but yes the attempt is to:
I do like structured answers! Great, it can be commented on item by item, a very 
productive approach indeed. So I will make short comments here too, sorry :)

> a) benchmark more accurately
Yes, it does. But this can be considered only an additional benefit, 
because that precision is not really needed, as we can see now.

> b) allow consistent credit so that historical credit and now credit
> are comparable and not being debased over time

Well, yes, better credit measurement and a constant credit definition would make 
them comparable.
The question is: what do we hope credits are for? RAC as a performance-indicator 
tool (I personally use RAC as a performance indicator from time to time, since I 
see no other point in credits at all; again, that's just me. Others like to climb 
the credit ladder, and I see nothing wrong with that as long as it doesn't disturb 
the results themselves), OR credits as a social-engineering tool to attract more 
participants.
This second aspect of credit is probably even more important for the projects 
themselves, though it could make the whole credit system simply unneeded for some 
participants.
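
(For context on why RAC works as a rough performance indicator: BOINC's RAC is an 
exponentially decaying average of granted credit with a half-life of one week. A 
minimal C++ sketch of that decay, simplified from the update_average() logic in 
the BOINC source rather than copied from it:)

#include <cmath>
#include <cstdio>

// Simplified sketch of BOINC-style RAC decay (not BOINC's exact code).
// RAC is a per-day credit rate with a one-week half-life: with no new
// credit, a host's RAC halves every 7 days.
static const double HALF_LIFE = 7.0 * 86400.0;   // one week, in seconds

double update_rac(double rac, double dt, double new_credit) {
    double weight = std::exp(-dt * std::log(2.0) / HALF_LIFE);
    rac *= weight;                               // old credit decays away
    double dt_days = dt / 86400.0;
    if (dt_days > 0.0) {
        // fold in new credit as a per-day rate
        rac += (1.0 - weight) * (new_credit / dt_days);
    }
    return rac;
}

int main() {
    double rac = 1000.0;
    for (int week = 1; week <= 4; week++) {      // idle host: no new credit
        rac = update_rac(rac, 7.0 * 86400.0, 0.0);
        std::printf("after idle week %d: RAC = %.1f\n", week, rac);
    }
    return 0;
}

(So a host that stops crunching halves its RAC every week, which is what makes it 
track current throughput rather than accumulated history.)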

> c) reward people for doing work regardless of it being for the
> benchmarking and testing or for, ahem, "real" science
I disagree: "fake" science should not only go unrewarded, it should actually be 
punished, IMO!
In the case of intentional cheating/faking, it wastes project resources. In the 
case of a misbehaving host, it wastes the participant's resources (electricity, 
at the very least!) and should be discovered and repaired ASAP, not rewarded as 
accepted behavior.

> d) establish quality measurements of the hardware systems
Again, this can be considered only an additional bonus, not the main goal.

> e) allow end-to-end software validation across the entire network and
> not merely in the lab
Not needed at this level.

> f) allow for the detection and identification of systemic errors that
> we are not currently aware of ...
Like e), this is work for alpha/beta projects, not for the mainstream.

> g) lay the ground work for the establishment of quality metrics
These metrics do not account for random errors.
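
(To illustrate: a single-number quality metric is essentially a mean, and a mean 
hides random errors. A hypothetical C++ sketch, with made-up numbers, of two hosts 
whose mean deviation from the canonical result is identical but whose error 
behavior is not:)

#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical illustration, not BOINC code: a "steady" host that is
// always off by a tiny amount, vs. a "flaky" host that is usually exact
// but produces one random large error. Their means match; spreads don't.
double mean(const std::vector<double>& v) {
    double s = 0;
    for (double x : v) s += x;
    return s / v.size();
}

double stddev(const std::vector<double>& v) {
    double m = mean(v), s = 0;
    for (double x : v) s += (x - m) * (x - m);
    return std::sqrt(s / v.size());
}

int main() {
    std::vector<double> steady(100, 0.01);  // always off by 0.01
    std::vector<double> flaky(100, 0.0);
    flaky[42] = 1.0;                        // one random large error
    std::printf("steady: mean=%.4f stddev=%.4f\n", mean(steady), stddev(steady));
    std::printf("flaky:  mean=%.4f stddev=%.4f\n", mean(flaky),  stddev(flaky));
    return 0;
}

(A mean-only metric rates both hosts the same, at 0.01; only the spread, or an 
outlier count, exposes the host with random errors.)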

> h) establish cross-project credit parity
Yes, but consider the overhead.

