
Improving peer review through research

Here on ReimagineReview, we envision the continual improvement of peer review through experimentation and research. By pairing their innovations with research and outcome reporting, the projects we list can demonstrate whether their approaches actually improve peer review. The collective efforts of experimental peer review projects to monitor and transparently report their outcomes will enable advances in peer review practice.

ReimagineReview asks project representatives to share information about the outcomes they are looking for and the results they monitor. We summarize some of these below to share what we have learned so far; you can find more detail by searching for individual projects on the registry or looking through the data archived on Zenodo. For fellow peer review innovators considering launching a trial or project, we offer some tips on tracking metrics and on the elements of robust project monitoring.

Metrics

The most obvious way to track the efficacy of a project is through usage statistics, which are sometimes publicly available. Many projects report the number of comments, views, or downloads they receive as part of their ReimagineReview listings. However, it is important to keep in mind that these figures are merely a snapshot in time; recording them at regular intervals shows whether engagement is growing or fading.
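For projects that want to look beyond a single snapshot, one lightweight option is to log the same counts on a regular schedule. The minimal Python sketch below assumes the counts are pulled from a project's own platform or analytics dashboard; the file name, fields, and example numbers are illustrative rather than drawn from any listed project.

```python
import csv
from datetime import date

def record_snapshot(path, comments, views, downloads):
    """Append today's usage counts to a CSV so that trends,
    not just one-off snapshots, can be examined later."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [date.today().isoformat(), comments, views, downloads]
        )

# Example call; real counts would come from the project's own reporting.
record_snapshot("usage_log.csv", comments=12, views=340, downloads=57)
```

Appending a dated row at a regular cadence yields a simple time series that distinguishes steady growth from a one-off spike.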

Where a peer review project is offered as a trial option within an established journal, projects can also track the opt-in rate. For example, eLife conducted an author-led peer review trial in which one third of authors opted in. In the same trial, the journal was able to track the initial editorial success rate and its correlation with last-author gender and career stage. In the 2016 Transparent Peer Review trial conducted by Nature Communications, 60% of authors opted to publish the peer review reports, and the 2019 Transparent Peer Review trial conducted by Wiley saw a 77% author opt-in rate. Whether author opt-in rates increase over time in other transparent peer review projects may reveal global trends in author preferences.

Surveys 

Usage statistics alone are likely insufficient for optimizing the design of a given project or service. Direct feedback from users (including authors, reviewers, editors, and readers) can suggest improvements. For example, HIRMEOS categorized email responses to invitations to participate in its open review project in order to identify reasons for non-participation and factors influencing annotators (such as contact with the author). ASM considered feedback in evaluating its mSphereDirect trial, and biOverlay described author reactions that influenced decision-making on the project's future. Surveys offer a more systematic way to collect and organize such feedback: notably, The BMJ and Research Involvement and Engagement (RIE) surveyed participants in their trial of including patients and members of the general public in the review process.

In order to facilitate comparison of innovations and user motivations across different projects, we designed a standardized survey for interested project leads to administer to their users. We had hoped that this would be the first landscape study of the state of peer review innovations, but we found that it is difficult to conduct a study that would be broadly compatible with all projects’ goals, data protection needs, and time constraints. We nevertheless hope that these questions can be a useful resource or inspiration for future surveys.

User demographics can be used to identify disparities

In peer review of scholarly output, publication success is stratified by ethnicity and other demographic traits of the researchers, and reviewers and editors show homophilic biases toward their own demographic groups. For example, while there was no statistically significant difference between male and female authors in opting into the Nature double-blind peer review trial, authors from institutions ranked lower by Times Higher Education were more likely to choose double-blind review. Similarly, the author-led peer review trial at eLife favored late-career authors.
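To illustrate the kind of check behind a statement like "no statistically significant difference in opt-in", here is a minimal sketch of a chi-square test on a two-by-two table of opt-in counts. The numbers are invented for demonstration and are not data from the Nature, Wiley, or eLife trials.

```python
from scipy.stats import chi2_contingency

# Hypothetical opt-in counts for two author groups (not real trial data).
# Rows: group A, group B; columns: opted in, did not opt in.
table = [
    [120, 180],  # group A: 40% opt-in
    [150, 150],  # group B: 50% opt-in
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
# A small p-value (conventionally < 0.05) would suggest the two groups'
# opt-in rates differ beyond what chance alone would explain.
```

The same test extends to more groups or categories by adding rows and columns, though very small cell counts call for an exact test instead.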

Peer review projects are not immune to the biases inherent in traditional peer review. In order to identify and address demographic-based disparities, we encourage project leads to monitor user demographics while maintaining participant anonymity, in compliance with local regulations such as the GDPR and French laws regarding the storage of data on race and ethnicity.
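One simple way to report demographic breakdowns without exposing individuals is to publish only aggregated counts and to suppress very small groups. The sketch below is an illustration under assumptions of our own (voluntary self-reported categories and an arbitrary suppression threshold of five), not guidance on GDPR or French-law compliance.

```python
from collections import Counter

def aggregate(responses, min_cell=5):
    """Count self-reported categories and suppress any group smaller
    than min_cell so that individuals cannot be singled out."""
    counts = Counter(responses)
    return {group: n if n >= min_cell else f"<{min_cell}"
            for group, n in counts.items()}

# Hypothetical self-reported career stages from an opt-in survey.
stages = ["early"] * 12 + ["mid"] * 8 + ["late"] * 3
print(aggregate(stages))  # {'early': 12, 'mid': 8, 'late': '<5'}
```

Keeping the raw responses separate from any identifying information, and discarding them once aggregated, further reduces re-identification risk.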

Now more than ever, peer review serves a critical role in the validation of discoveries, yet the effectiveness of the standard peer review system is uncertain because of a lack of research and transparency. Peer review experiments can reach their full potential through transparent outcome reporting, thereby building an evidence base for improving all peer review. We will continue to document and promote peer review experiments and their findings here on ReimagineReview.

 

Victoria and Jessica
