Reliability of reviewers' ratings when using public peer review: a case study
Authors: Bornmann, L.; Daniel, H.-D.
Source: Learned Publishing, Volume 23, Number 2, April 2010, pp. 124-131
Abstract: If a manuscript meets scientific standards and contributes to the advancement of science, two or more reviewers can be expected to agree on its value. Manuscripts are rated reliably when there is a high level of agreement between independent reviewers. This study investigates for the first time whether inter-rater reliability, which is low under the traditional model of closed peer review, is equally low under the new system of public peer review, or whether public peer review yields higher coefficients. To investigate this question, we examined the peer-review process practiced by the interactive open access journal Atmospheric Chemistry and Physics, based on 465 manuscripts submitted between 2004 and 2006 that received 1,058 reviews in total. The results of the study show that inter-rater reliability in public peer review is low as measured by the kappa coefficient and reasonable as measured by the intraclass correlation coefficient.
Document Type: Research Article
Publication date: 2010-04-01
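
As a rough illustration of the two agreement statistics named in the abstract (a minimal sketch, not the authors' analysis code), the Python functions below compute Cohen's kappa for two raters' categorical ratings and the one-way intraclass correlation coefficient, ICC(1,1), for a manuscripts-by-raters rating matrix. The sample ratings are hypothetical, chosen only to show the calculation.

    import numpy as np

    def cohens_kappa(r1, r2):
        """Chance-corrected agreement between two raters on categorical ratings."""
        r1, r2 = np.asarray(r1), np.asarray(r2)
        cats = np.union1d(r1, r2)
        p_o = np.mean(r1 == r2)  # observed proportion of exact agreement
        # expected agreement if both raters assigned categories independently
        p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)
        return (p_o - p_e) / (1.0 - p_e)

    def icc_oneway(ratings):
        """ICC(1,1), one-way random-effects model; rows = manuscripts, cols = raters."""
        x = np.asarray(ratings, dtype=float)
        n, k = x.shape
        row_means = x.mean(axis=1)
        # between-manuscript and within-manuscript mean squares from one-way ANOVA
        msb = k * np.sum((row_means - x.mean()) ** 2) / (n - 1)
        msw = np.sum((x - row_means[:, None]) ** 2) / (n * (k - 1))
        return (msb - msw) / (msb + (k - 1) * msw)

    # hypothetical ratings on a 1-4 quality scale, two reviewers per manuscript
    ratings = np.array([[1, 2], [2, 2], [3, 4], [2, 3], [1, 1], [4, 3]])
    print(cohens_kappa(ratings[:, 0], ratings[:, 1]))
    print(icc_oneway(ratings))

Unweighted kappa treats the rating categories as purely nominal and credits only exact agreement, whereas the ICC treats them as points on a scale and gives partial credit to near-agreement, which is one reason the two coefficients can diverge as they do in the study.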