Blogs selected for the week of September 11 to September 17, 2017

1. Open peer review: bringing transparency, accountability, and inclusivity to the peer review process

Open peer review (OPR) is moving into the mainstream, but it is often poorly understood, and surveys of researcher attitudes show important barriers to implementation. In his post on the LSE Impact of Social Sciences blog, Tony Ross-Hellauer provides an overview of work conducted as part of an OpenAIRE2020 project to offer clarity on OPR, and issues an open call to publishers and researchers interested in OPR to come together to share data and scientifically explore the efficacy of OPR systems as part of an Open Peer Review Assessment Framework.

The blog post says (quote): As the open science agenda has taken hold, "open peer review" has been proposed as a solution to some of these problems. By bringing peer review into line with the aims of open science, proponents aim to bring greater transparency, flexibility, inclusivity and/or accountability to the process. Various innovative publishers, including F1000, BioMed Central, PeerJ, and Copernicus Publications, already implement systems that identify themselves as OPR. However, such systems differ widely across publishers, and indeed - as has been consistently noted - OPR has neither a standardised definition nor an agreed schema of its features and implementations. While the term is used by some to refer to peer review where the identities of both author and reviewer are disclosed to each other, for others it signifies systems where reviewer reports are published alongside articles… (unquote)

The full entry can be read here.

2. Do We Need A Self-Citation Index?

Everyone cites themselves. It is normal, acceptable academic behaviour for anyone who has published more than one paper, a way of building upon one's prior work and the work of one's colleagues. Designed to identify individuals who might be gaming their h-index score, the s-index may do more harm than good, notes Phil Davis in his post on the Scholarly Kitchen Blog.

The blog post says (quote): Modelled after the h-index, a single metric that combines author productivity and citation performance, the s-index would calculate author self-citation scores. Given the extent that citation metrics play in the evaluation of individuals for grants, promotion, and tenure, Flatt worries that self-citation is causing unintended damage to the course of scientific research and discovery. Moreover, the benefits appear to accrue to those who abuse the citation record, further exacerbating gender and status biases. Flatt's socioeconomic pronouncements on the damage done by self-citation relies heavily on the findings of a single bibliometric study of Norwegian scientists, which itself, may suffer from confounding effects, that is, confusing causes and effects. Still, we can all probably think of authors who go to extremes to promote their own papers, whether they are relevant to the manuscript or not… (unquote)
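The excerpt says the s-index would be "modelled after the h-index". As a point of reference, the sketch below shows the standard h-index rule; the s-index variant shown (the same rule applied to self-citation counts only) is an assumption about how such a metric might work, not Flatt's exact definition.

```python
def h_index(citation_counts):
    """Largest h such that h of the author's papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

# Hypothetical s-index: apply the same rule, but count only
# self-citations per paper (illustrative data, not from the post).
self_citations_per_paper = [6, 5, 4, 1, 0]
print(h_index(self_citations_per_paper))  # 3
```

An author with a high s-index relative to their h-index would, under this reading, be one whose citation record leans heavily on citing their own work.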

The full entry can be read here.

3. Altmetric at the University of Surrey

In a recent case study, Altmetric reports on how the University of Surrey is using Altmetric tools and data to underpin its five-year strategic research plan. In his post on the Altmetric Blog, Josh Clark talks with Dr. Abigail McBirnie, Bibliometrics Advisor at the University, about how the Altmetric Explorer for Institutions is helping them meet their challenging targets.

The blog post says (quote): The bibliometrics service within the Library of the University use the Altmetric Explorer to provide qualitative data that is used alongside quantitative, citation based, data to provide a more comprehensive view of the engagement around research outputs produced by the University. The Explorer has also been used to support teaching. Faculty librarians use altmetrics data to evidence decisions for acquiring specific journals, making sure that budget is spent efficiently. The reporting functionality within the Explorer is used by the Library to analyse data for publications from peer institutions, discover new collaboration opportunities and compare how their outputs are performing individually, at department and at an institutional level… (unquote)

The full entry can be read here.

4. Does Born-Digital Mean Rethinking Peer Review?

The impact of born-digital scholarship on peer review largely depends on how far the work in question diverges from the traditional print monograph in form and audience. Karin Wulf, in her post on the Scholarly Kitchen Blog, discusses how peer review practices are shifting, and will continue to shift, to accommodate long-form digital scholarship.

The blog post says (quote): Washington and Stanford have augmented or plan to augment traditional peer review to reflect the nature of the projects they're working on. Stanford requires evidence that the project's design is viable at the proposal stage. Proposals must include a prototype of the project that conveys the design strategy of the final project and one or two fully built-out sections of the project. Proposal and prototype go out to three or four reviewers: two are disciplinary specialists, one of whom ideally is familiar with digital projects, and one or two additional reviewers are format specialists. All the reviewers are asked some basic usability/design questions. Friederike also requires authors to submit write-ups guiding reviewers through these genre-busting projects… (unquote)

The full entry can be read here.

5. Scientific journal credibility: Would the R-factor be the way to go?

To discourage academics from publishing eye-catching but irreproducible results, researchers have put forward an ambitious new metric to rate the reproducibility of a scientific paper: the R-factor. The proposed R-factor aims to measure the factual accuracy of a paper and to address the growing "crisis" of scientific credibility, writes Arlyana Saliman in the CIMS Today Blog.

The blog post says (quote): Upon publication, a scientific paper can be cited by another paper. The latter confirms, refutes or mentions the paper. The R-factor comes into play by indicating how often a claim has been confirmed. It is calculated by dividing the number of subsequent published reports that verified a scientific claim by the number of attempts to do so. In other words, the R-factor is the proportion of confirmed claims over the total number of attempts. It is rated from zero to one: the closer the R-factor is to 1, the more likely the study is true. Attaching an R-factor to a paper should give researchers "a bit of pause before they actually publish stuff" - because the metric will rise or fall, depending on whether later work corroborates their findings, asserted Josh Nicholson, one of the inventors of the metric and chief research officer at Authorea, a New York-based research writing software company. R-factors can also be applied to investigators, journals or institutions, whereby their R-factor would be the average of the R-factors of the confirmed claims that they reported… (unquote)
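The arithmetic described in the excerpt is simple enough to sketch directly: a claim's R-factor is the number of confirming reports divided by the number of verification attempts, and an aggregate R-factor (for an investigator, journal, or institution) is the average over claims. Function names here are illustrative, not from the original proposal.

```python
def r_factor(confirmations: int, attempts: int) -> float:
    """R-factor of a claim: confirming reports / total verification attempts."""
    if attempts == 0:
        raise ValueError("R-factor is undefined with no verification attempts")
    if not 0 <= confirmations <= attempts:
        raise ValueError("confirmations must be between 0 and attempts")
    return confirmations / attempts

def aggregate_r_factor(claim_scores) -> float:
    """Aggregate R-factor: the average of per-claim R-factors, as the post
    describes for investigators, journals, or institutions."""
    scores = list(claim_scores)
    return sum(scores) / len(scores)

# Example: a claim confirmed by 3 of 4 replication attempts scores 0.75;
# an investigator reporting claims scoring 0.75, 1.0, and 0.5 averages 0.75.
print(r_factor(3, 4))                         # 0.75
print(aggregate_r_factor([0.75, 1.0, 0.5]))   # 0.75
```

Note that, as critics of the proposal point out, the score is only as informative as the underlying replication record: a claim with one confirming attempt scores a perfect 1.0, the same as one confirmed a hundred times.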

The full entry can be read here.
