- Editorial
- Open Access
- Published:
What are innovations in peer review and editorial assessment for?
Genome Biology volume 21, Article number: 87 (2020)
Peer review at research journals is going through a period of intense innovation. Some journals are experimenting with 'open' review procedures that reveal identities or even review reports; some with pre-registered reports that shift review attention to experimental protocols rather than to results; or with post-publication review through readership commentary [1]. Well-resourced journals embed peer review in editorial review procedures that may include text similarity scanners, language, or reference checks, or that involve low-wage sub-contracting of editorial work in highly distributed procedures. With increasing IT support and editorial division of labour, peer review is but one link in the chain that guards, selects, and improves manuscript quality, as part of editorial procedures that are now more diverse than ever, even though the bulk of research journals still use fairly standard peer review procedures and more radical innovations are limited to a few research niches [2]. How can we learn from all these innovations?
Diversity in expectations
An important driver of current editorial innovations is a set of diverse and occasionally incongruous expectations. Perhaps most telling in this respect is the question of whether peer review is only meant to distinguish correct from incorrect research or whether it should also distinguish interesting and relevant from less important or even trivial research. High-volume journals such as the PLoS series ask their reviewers to merely assess whether reported results are correct, not whether they are novel or earth-shattering. As a result, these journals publish very large numbers of open access articles, with relatively moderate Article Processing Charges. On the other end of the spectrum, journals like Nature or Science will not publish even the most solid research without important news value for their wide and interdisciplinary readership. Should peer review distinguish between important and less important findings? The grounds on which peer review and wider editorial assessment are to select papers for publication are closely related to journal business models.
The diversity of expectations for peer review is even bigger if we consider the variation between research fields. It is easy to slip into the research equivalent of ethnocentrism: to think that all research fields basically work like our own, or would be better off if they did. The editorial assessment of experimental genetics is quite a different matter from the assessment of a climate model, a mathematical proof, a geological measurement, or even further afield: qualitative social science. The scholarly publication system caters for a wide range of research endeavours. The growing diversity of publication practices and the specific ways in which these assess the value of contributions should come as no surprise.
Replication and misconduct
Other concerns driving peer review innovations have included the 'replication crisis': the worry that many published results appear hard to replicate and that this endangers the very core of the scientific endeavour [3]. Improved peer review, and improved editorial procedures in which peer review is embedded, are also seen as a way to make sure that what gets published is truly reliable.
Unreproducible research may not necessarily be wrong, but simply incompletely reported. Hence, various initiatives have been developed to increase the detail in research reports, in particular with respect to methods. These include checklists for biomedical research materials [4], for the adequacy of animal research reports [5], instructions to improve materials' identification [6], or to improve research materials' validation [7]. Such initiatives may provide extra data allowing peer reviewers and readers to verify reported results, but may also act as nudges to authors, or as publication checks used directly by editorial staff (rather than peer reviewers).
Instead of relying entirely on the personal expertise of reviewers, checklists and publication guidelines aim to improve the scientific record through proceduralisation: researchers are expected to improve the reproducibility or even reliability of their work by having to provide detailed methodological information. For example, methodological publication guidelines may not just encourage researchers to more adequately report the identity of research animals, antibodies, or cell lines. Some concerned commentators also hope this will actually raise the standards of animal testing (such as through randomisation or blinding), improve the validation of antibodies, or eradicate the festering problem of misidentified cell lines [8].
Even more alarming reasons for editorial innovations have been based on worries over research fraud. While it can be argued that peer reviewers or even editors cannot be held accountable for malicious practices of their authors, checks for plagiarism, duplicate publications, statistical data manipulation, or image doctoring do suggest at least some responsibility is expected from and taken by journals. This responsibility extends to clear and forthright action after problematic publications have been discovered, such as through retractions, the large majority of which involve misconduct [9]. While the expectations may be high for editors to take action against fraud, from retracting papers to alerting authorities or host institutions, this may also put a considerable additional burden on editorial offices. This is especially the case since misconduct may not always be clear-cut and allegations may be challenged by the accused, who are also entitled to fair treatment and protection from slander.
Editorial innovations in response to replication and misconduct concerns are also stimulated by the affordances of IT or shifts in publication business models. On the affordance side, electronic publishing and booming information science resources have facilitated the development of text similarity scans, with an expansion from applications in the policing of student plagiarism to scientific publishing. In a similar vein, semi-automated statistics scanners and tools to flag falsified or copied images are now in development. Here too, commercial considerations play a role. Advertised as a way to improve the quality of published research, such technology-supported editorial checks can also be deployed by scientific publishers as justifications for relatively costly publishing formats, in the face of looming community-managed open access initiatives ranging from pre-print servers to meta-commentary initiatives such as PubPeer.
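To give a concrete sense of what a text-similarity scan does at its core, here is a minimal sketch in Python using only the standard library's difflib. This is a toy illustration, not how commercial services such as Crossref Similarity Check work: production scanners index millions of documents and use far more sophisticated matching, and the 0.8 threshold below is an arbitrary value chosen for the example.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude word-level similarity ratio between two text fragments (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower().split(), b.lower().split()).ratio()

# Hypothetical submitted passage compared against a previously published one.
submitted = "Peer review is going through a period of intense innovation."
published = "Peer review is currently going through a phase of intense innovation."

score = similarity(submitted, published)
print(f"similarity: {score:.2f}")
if score > 0.8:  # illustrative threshold; real systems tune this per context
    print("possible text overlap: flag for manual editorial check")
```

Even this crude ratio shows the general principle: the tool does not judge plagiarism, it only surfaces candidate overlaps for human editorial staff to assess, which is why such checks sit alongside, rather than replace, peer review.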
Unclear efficacy
Much as innovations in editorial procedures are advocated by scientists and publishers on a mission to raise research literature standards, the evidence for the efficacy of these innovations is patchy and sometimes even contradictory. Some of the innovations move in opposite directions: increasing objectivity of reviews can be presented as a reason for increased anonymity, but also for revealing identities of all involved. 'Double blind' reviews (or even 'triple blind', if author and reviewer identities are anonymised to editors) are expected to encourage reviewers and editors to focus on content, rather than to be influenced by authors' identities, affiliations, or academic power positions. Inversely, revealing identities, or even publishing review reports, can also be presented as beneficial: as a form of social control making reviewers accountable, in which it is not possible to hide improper reviews behind anonymity, or in which the wider research community can keep a vigilant eye. The key question in the blindness-versus-openness debate has been what constitutes the best way to neutralise bias or unfairness based on personal dislike, power abuse, disproportionate respect for/abuse of authority, rudeness, gender, institutional address, or other social processes that editorial fairness is expected to neutralise. So far, no conclusive evidence has been presented for the superiority of either strategy.
A similar shortage of evidence is witnessed in the case of journals' methodological guidelines and reporting standards. While guidelines and checklists may improve the identification of research materials in published papers, guidelines do not work by themselves. Guidelines require active implementation by journals and some degree of support from the research community on which journals rely for the continued submission of manuscripts. For example, journals cannot police scientific rigour beyond what their research constituency as a whole is willing to provide. In the face of publication pressures or the costs of extra validation testing, improved reporting seems to focus on more easily fixable identification rather than deeper validation of research materials. Furthermore, if researchers provide antibody validation information, this also requires expertise on validation procedures among reviewers or editors, which may not be obvious in all fields using antibodies as research tools. (For similar reasons, some journals now work with statisticians as part of a growing specialisation in review to cover specific methodological issues.) Such guidelines need to be well-embedded and enforced if they are to fundamentally improve methodological procedures.
The publishing landscape
The vivid diversity and innovation in editorial policies creates exciting opportunities to learn from each other. The use of checklists and other reviewer instructions, specialisation of reviewers, post-publication review and correction practices, and similar innovations may well be of far wider use than the journals that are currently experimenting with them. One condition for learning is that editorial assessment is visible and transparent [10]. It is quite puzzling to see how many journals still just announce that they 'use peer review to assess papers', as if that explains how papers are handled. Another condition is that innovation processes have to respect the variety of research cultures. For example, large publishers, catering for a wide range of research fields, are well aware that one size does not fit all: there is not one best way to organise editorial assessment, but this should not preclude possibilities to try out innovations that seem to work well elsewhere.
More systematic evaluation of how innovations change editorial assessment would certainly also help this learning process. Nonetheless, given the wide range of expectations and motivations involved, evaluating the effects of editorial innovations is complex. For instance, whether single or double blind is 'better' is not simply a matter of whether more errors are filtered out, but also of fairness (gender, institutional address), of whether the more significant papers are (or should be) selected, whether reproducibility is improved, whether fraud is traced, and all these other mixed or even incompatible expectations.
Moreover, the possibilities for editorial improvement do not present themselves in a void. Reasonable if complex arguments have to be measured against systemic realities of the research world. A prominent factor here is publishing economics. After a wave of concentration in the research publishing industry [11], the large publishers are now developing strategies to survive and thrive in the age of 'open science'. While science policy is pushing for open data and open access publishing, some publishers aim to develop new business models based on indicators, databases, and similar uses of meta-data in search engines and research assessment tools. Their willingness to adopt editorial innovations depends on their strategic choices and business models, which seem increasingly focused on turnover, efficiency, and advanced division of labour in highly structured and automated publication management systems.
Another context that conditions our options for innovation is the research evaluation system: how we assess scientific achievements, honour career advancement, or distribute resources between research institutes and teams. Unfortunately, the development of publication-based indicators (such as publication counts, citation counts, h-factors, or impact factors) has pushed the research publication system to its limits. Many researchers now submit papers 'to get a publication', spurred on by tenure-track criteria, competitive job pressure, and sometimes even considerable financial bonuses, and quite understandably so, as their careers as scientists may depend on it. Young researchers need to 'score' with prominent publications, and our journals need to cater for this too, at least for the time being. While the obsession with 'output measurement' has spread from the Anglo-Saxon world to emerging research cultures such as China, where it has now taken perhaps its most extreme form [12], even metrics developers are coming to their senses and are advocating research evaluation that returns to 'quality over quantity' [13], but this will take time.
Reflecting on a future of careful editorial assessment and meaningful peer review therefore also requires us to pause and think about what is at stake in how we share our research findings. Do we really need the high-speed production of factoids, the citation-scoring, career-boosting, hastily published papers that end up needing corrections further down the line? Or is there something to be said for slowing down, in a research world that aims more at cooperative advancement of knowledge rather than 'scoring'? The daily practice of how we run and try to improve our journals reflects these large questions as much as the small, technical ones.
References
-
Horbach SPJM, Halffman W. The changing forms and expectations of peer review. Res Integr Peer Rev. 2018;3:8.
-
Horbach SPJM, Halffman W. Journal peer review and editorial evaluation: cautious innovator or sleepy giant? Minerva. 2019. https://doi.org/10.1007/s11024-019-09388-z.
-
Baker M. 1,500 scientists lift the lid on reproducibility: survey sheds light on the 'crisis' rocking research. Nature. 2016;533:452–4.
-
Marcus E. A STAR is born. Cell. 2016;166:1059–60.
-
Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. PLoS Biol. 2010;8:e1000412.
-
Vasilevsky NA, Brush MH, Paddock H, Ponting L, Tripathy SJ, LaRocca GM, Haendel MA. On the reproducibility of science: unique identification of research resources in the biomedical literature. PeerJ. 2013;1:e148.
-
Bordeaux J, Welsh AW, Agarwal S, Killiam E, Baquero MT, Hanna JA, Anagnostou VK, Rimm DL. Antibody validation. BioTechniques. 2010;48:197–209.
-
Fusenig NE, Capes-Davis A, Bianchini F, Sundell S, Lichter P. The need for a worldwide consensus for cell line authentication: experience implementing a mandatory requirement at the International Journal of Cancer. PLoS Biol. 2017;15:e2001438.
-
Fang FC, Steen RG, Casadevall A. Misconduct accounts for the majority of retracted scientific publications. Proc Natl Acad Sci. 2012;109:17028–33.
-
Declaration on transparent editorial policies for academic journals [https://www.ru.nl/science/isis/research/transparency-declaration/]. Accessed 30 Mar 2020.
-
Larivière V, Haustein S, Mongeon P. The oligopoly of academic publishers in the digital era. PLoS One. 2015;10:e0127502.
-
Quan W, Chen B, Shu F. Publish or impoverish: an investigation of the monetary reward system of science in China (1999-2016). Aslib J Inf Manag. 2017;69:486–502.
-
Hicks D, Wouters P, Waltman L, de Rijcke S, Rafols I. The Leiden Manifesto for research metrics. Nature. 2015;520:429–31.
Author information
Affiliations
Contributions
The authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Reprints and Permissions
About this article
Cite this article
Halffman, W., Horbach, S. What are innovations in peer review and editorial assessment for? Genome Biol 21, 87 (2020). https://doi.org/10.1186/s13059-020-02004-4
-
Published:
-
DOI: https://doi.org/10.1186/s13059-020-02004-4
Source: https://genomebiology.biomedcentral.com/articles/10.1186/s13059-020-02004-4