
Refreshing the F1000 model: transitioning to editorial-led peer reviewer selection

By F1000

In this blog, F1000’s Managing Director, Rebecca Lawrence, outlines the first of several adjustments to the F1000 model. This first tweak removes a current author pain point: suggesting reviewers.

When F1000Research first launched in 2013, it set out to rethink how research is communicated, with the aim of greatly accelerating the sharing of new findings, bringing control of the process back to the authors and the research community, and providing transparency across the full process, from underlying research data and code through to open peer review. These remain crucial principles on which all the F1000 platforms are based.

However, with over ten years' experience of running this model, and having observed the plethora of related experiments that have sprung up across the ecosystem during that time, we have taken the opportunity to review elements of the model.

We will be making a series of tweaks and adjustments based on feedback from our authors, reviewers, and partners, analysis of current process outcomes, and successes with alternative approaches being tested elsewhere that we can learn from.

Peer reviewer selection

One of the first major areas we have focused on is the process by which reviewers are identified and selected, which will shift from being author-led to editorial-led, i.e. driven by our internal editorial teams. We have always felt that authors are often better placed than anyone to know the experts in their specific field. Any risk of potential conflicts is negated by the full transparency of the ensuing peer review process (including the naming of the reviewers), combined with our detailed independent checks of any author-suggested reviewers. Whilst in many cases this still holds true, we have found that for many authors this process is a pain point.

Researchers often choose to publish with F1000 because they want to make their work available as quickly as possible, and our rapid-publication model is designed to support that. Yet for some authors, the requirement to supply the names of potential reviewers can significantly slow things down.

If you’re an early career researcher, or in a situation that makes it harder to be well connected within the research community, you may not already know the most appropriate people in the field to review your work. For these researchers, identifying suitable experts, sometimes over repeated rounds of suggestions, can prove a time-consuming and unfamiliar challenge, which can lead to further frustration if their suggestions then fail to meet our reviewer criteria.

How will the new process work? 

Authors will continue to submit their articles through our single-page submission system. However, the section for providing reviewer names will now be clearly marked as optional: those who do wish to suggest reviewers can continue to do so, and we will check those suggestions as normal, but this is no longer a requirement. All authors will also be able to list any reviewers they do not wish to have contacted (and why). Once submitted, the paper will undergo all the usual checks to ensure that our policies and ethical guidelines have been adhered to prior to publication.

Once published, the F1000 in-house editorial team will get to work identifying and inviting reviewers to assess the article. They will be drawing on the expertise of our colleagues at Taylor & Francis who have extensive experience of selecting qualified reviewers across all fields. 

We will continue to operate an open peer review model: reviewer names, their reports, and the authors’ responses are published alongside the article, so that readers can benefit from the additional viewpoints and context.

Having tested this process, we have found that editorial-led peer review significantly speeds up both the time to publication of the article prior to review (since there is no need to wait for an adequate number of verified reviewer names in the system) and the peer review process itself. We are confident that similar significant benefits in terms of the speed of the publishing and peer review processes, as well as greater author satisfaction, will be replicated now that we have rolled out this adjusted approach across all our platforms.

We are very grateful to members of the community who continue to provide useful feedback about their experience of F1000 platforms and inspire us with their ambitions for open research publishing. We hope this latest development to the way we work together will make the process of sharing their outputs easier for all researchers, particularly those who encounter greater obstacles to getting their voices heard.  

As I mentioned, this is the first of several such adjustments, so look out for further announcements later in the year as we continue our detailed review of the F1000 publishing model, author and reviewer experiences, and alternative approaches being tried and tested by others in the community.