Postdocs meet publishers: what can be done to make publishing better for ECRs?

Early Career Researchers (ECRs) occupy a distinct – and often challenging – role within the research community. 

They are at the forefront of the latest technology and practices, are a vital driver of innovation, and arguably carry out the lion’s share of the groundwork in study design and data collection. Their contribution is integral to both progress and production in research. However, working in an environment marked by seismic shifts in education delivery methods, greater workload pressures, less research funding, fewer professional opportunities, and the pressure to ‘publish or perish’, early career researchers can feel like they’re doing more work for less reward. This is reportedly enough to push promising talent out of the field in droves: a recent survey found that 74% of researchers are likely to leave the field in the next five years.

With such a vital role in the research ecosystem, publishers have the power to help shift this dynamic. With this in mind, a two-day symposium held before the 2023 Academic Publishing in Europe (APE) conference brought together publishers and ECRs to ask: what can be done to make publishing work better for researchers, publishers, and science communication? Following this, and to close the APE 2023 conference, five participants from the symposium held a live panel discussion in which they outlined the key arguments and takeaways. Below is a summary of this conference session.

A call to reform research assessment

It was widely acknowledged that papers published in high impact factor journals can elevate the status of an author and their institution, possibly leading to more funding for a department or area of research, as well as heavily influencing whether a researcher progresses on the track to tenure. In principle, this should reward researchers. However, it was argued that this dynamic has a cobra effect: in what is often known as the “publish or perish” culture, the quality of a researcher’s work becomes secondary to the quantity of studies they publish.

Since the San Francisco Declaration on Research Assessment (DORA) made its mark over 10 years ago, the call for research assessment reform has gained traction year on year. The call to re-evaluate research assessment recognizes a cultural reliance on “inappropriate” metrics to evaluate research impact. DORA’s key tenet is to “not use journal-based metrics, such as Journal Impact Factors (JIFs), as surrogate measures of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions”; instead, importance is to be placed on the intrinsic value of research and its impact on society. The Declaration also aligns with the transition towards an open research environment. The key aim is to evaluate research and researchers on the merits of their work as a whole.

Whilst the idea of moving away from the JIF as the sole indicator of a research paper’s worth was widely praised, putting this into practice was argued to be another matter entirely. Johannes Wagner (Copernicus) said of this, “what struck me most was a lot of demand for more inclusion…there was a lot of demand [from ECRs] for recognizing outputs alternative to classical research papers…and there was the sense that this was not recognized.”

Wagner also recounted challenges he had experienced first-hand in modernizing processes in publishing. He described instances where publishers had invested in making progressive changes – for example, building new infrastructure to support open research practices – but then saw limited uptake among ECRs. He argued that to truly see research assessment reform, researchers also need incentives to adopt new methods and new behaviours, like using new infrastructure or publishing different research outputs. For example, if open data is seen as a ‘nice to have’, but not mandated, then there’s far less incentive for often overworked, overburdened, and struggling ECRs to open, clean, standardize and publish their data transparently.  

As Bernd Pulverer (EMBO reports) noted, research assessment reform is a missing ingredient. However, he also noted that shifting longstanding norms, like which metrics should be used to assess research, has historically been a “glacial” process. 

Better tools for better outputs

ECRs and publishers agreed on one point: the processes and infrastructure that underlie the publishing machine need to change. Publishing is rife with inequalities, and — according to participants at the symposium — the current systems exacerbate them. Article Processing Charges are seen as prohibitive: high fees can lock out qualified researchers and their work if they lack substantial funding. The dominance of English as the publishing language can hamper the production and dissemination of global knowledge. Closed peer review systems are seen as outdated and unfit for purpose, with their lack of transparency a breeding ground for reviewer bias, subjectivity in deciding value, unethical practices and more. In contrast, open research practices, like sharing protocols, data, code, and open peer review, were suggested as potential ways to increase integrity and reduce bias.

Some existing processes were, however, lauded. Infrastructure such as the creation and maintenance of common and permanent identifiers (DOIs and ORCID, for example) was seen to facilitate knowledge sharing and networking, as well as being much easier to use. Supporting documentation such as data availability statements or competing interest declarations was welcomed as a safeguard for more ethical practice. More recent developments were seen as promising, even if in need of improvement or greater promotion: for example, name change policies, training on diversity and bias in peer review, inclusion statements, transparent peer review, and the increased ability to publish a range of research outputs beyond the research article.

A seat at the table

ECRs highlighted that in order to really influence research assessment reform, or even progressive changes in publishing, they needed more opportunities to be heard and to make a difference. Publishers agreed that including ECR voices and building in regular opportunities for dialogue would be key to bridging the gap between the two.

For example, both Pulverer and Maia Salholz-Hillel (QUEST) advocated for the idea of having ECRs involved in editorial boards. This has a dual benefit: ECRs gain valuable experience in the world of publishing, and publishers benefit from invaluable insight into ongoing changes in user needs, preferences, and opinions.

As part of this dialogue, Johanna Havemann, founder of AfricArXiv, pointed out the danger of speaking of ECRs as a homogenous group. Inclusion efforts need to consider the diversity among researchers; ECRs from different regions and across different disciplines will have different needs and opinions. For example, Havemann highlighted that researchers in the global south have very valuable contributions to make to the scientific canon and to the conversation about progress and infrastructure. However, they may be working under distinctly different conditions, with fewer resources. Today’s open calls for unpaid contributions to advisory boards, workshops, and even scholarly work undoubtedly add even more burden to researchers in these regions. Havemann emphasized that addressing this disparity would require active thought and deliberate efforts towards inclusion.

Havemann also raised the important point that representation is vital to furthering diversity and inclusion: “…should a non-African white person be the one representing the organization?…It would be possible to Zoom someone in from Nigeria or South Africa or Kenya,” adding, “I would like to invite us all to acknowledge, realize, and inform ourselves, all of us – whichever part of the world we have worked and live in – that history happened.”


An evolving partnership

Overall, whilst there was considerable overlap in what ECRs and publishers wanted to see for the future of scholarly communications and the scholarly ecosystem as a whole, the gap between the two groups was still apparent in the ECR panel discussion and at APE 2023 overall. To continue to tackle these challenges, there was a consensus that collaborative effort is needed in all areas: assessment, inclusion, technology, and research itself. All stakeholders – researchers at all stages, publishers, and evaluators – have crucial parts to play in advancing research to benefit the scholarly community and science.

Want to learn more? Find out how publishers and funders can help researchers choose open science and explore how research assessment reform can support better science. 
