Hi,
The best method is to allow for a collective footprint to determine the
value of an article. Igor is good with grammar, Constantine is good with
figure clarity, Bramblebush is good with literature reviews, Thor is good
with experimental design, The Ice Man is all about the methodology, etc.
etc.
It is naive to think there is such a thing as an expert reviewer.
Reviewers are good at this and that and whatever -- pigeonholed
perspectives... The combination of all these inputs is the expert
review. In a collective model, the whole group is accountable.
See ya,
Marko.
If homeostasis is the problem, this would seem to increase
conservatism. Reviewers would not want their names attached to
papers that might turn out to be wrong (perhaps embarrassingly so).
On Jan 28, 2009, at 11:25 AM, Peter Lissaman wrote:
The idea is that a published paper should be preceded by the names
of the reviewers for and agin said work. That terrifies the profs!
Still throws no light on the naysayers if a paper is rejected!
Peter Lissaman, Da Vinci Ventures
Expertise is not knowing everything, but knowing what to look for.
1454 Miracerros Loop South, Santa Fe, New Mexico 87505
TEL: (505) 983-7728  FAX: (505) 983-1694
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
Michael T. Nygard
mtnyg...@gmail.com
http://www.michaelnygard.com/
Release It! Design and Deploy Production-Ready Software
http://bit.ly/ReleaseIt
Beautiful Architecture
http://bit.ly/BeautifulArchitecture