Blogs selected for the week of September 18 to September 24, 2017



1. What does transparency in peer review mean to you?

As part of the 2017 Peer Review Week celebrations, Alice Meadows (ORCID) chaired a panel debate with Irene Hames (peer review and publication ethics expert), Andrew Preston (Publons), Carly Strasser (Gordon & Betty Moore Foundation) and Elizabeth Moylan (BMC) on what transparency in peer review meant to them. In her post on the BioMed Central blog, Elizabeth Moylan shares her perspective from the discussion.

The blog post says (quote): According to the author, transparency refers to the particular peer review model a journal uses, with transparent peer review specifically meaning that reviewer report content (but not reviewer names) accompanies publication of an article. EMBO and Nature Communications practise this and Genome Biology recently announced a trial. However, there are various "shades of transparency". Some journals share reviewer names but not the reports (e.g. Frontiers and Nature). eLife publish a synthesised decision letter, sometimes with reviewer names (if reviewers allow). Arguably open peer review, where reports are signed and accompany publication (as practised on 70 BMC journals from 1999 onwards), is the most transparent form of peer review. Open peer review makes reviewers and editors accountable, and reports can be cited, giving reviewers recognition for the work they do… (unquote)

The full entry can be read here.

2. New Assessment Process Boosts Credibility of Developing-world Journals

Assessing the quality of journal publishing processes has vexed researchers and the publishing industry for many years, amid the proliferation of online journals, open access models, and differing standards, metrics and expectations. New detailed assessments of journals in the Global South will provide reassurance to authors and readers and guide editors on how to improve their journals, notes Siân Harris in her post on the Scholarly Kitchen blog.

The blog post says (quote): Following a rigorous assessment process, journals are awarded one of six badges. These badges will inform and reassure authors and readers that journals meet international standards to a particular level. More uniquely, the framework will also provide guidance to journal editors on how they should improve their publishing processes. Editors can take advantage of various resources from INASP and AJOL to help them develop their publishing practices and then resubmit their journals for assessment after six months to a year. The process is initially focusing on the 900+ journals across the six JOL platforms (Africa, Central America, Nepal, Bangladesh, Sri Lanka and Mongolia) and exploring how it might be extended to other Southern journal platforms… (unquote)

The full entry can be read here.

3. Taking back control: the new university and academic presses that are re-envisioning scholarly publishing

A recent report from Jisc showcases the upward trend of universities and academics setting up their own presses in an environment increasingly dominated by large commercial publishing houses. Following up on the report's recommendations, Janneke Adema and Graham Stone, in their post on the LSE Impact of Social Sciences blog, put forward some ideas on how best to support these new initiatives through community- and infrastructure-building.

The blog post says (quote): What these new initiatives have in common is their focus on collaboration, where they don't see themselves as being in competition with each other. Notwithstanding this focus on collaboration and the sharing of skills and information, many of these initiatives perennially face issues around sustainability (especially if we abide by the industry definition of sustainability, which has come to expect profitability in addition to self-sustainability; it could also be argued that academic monograph publishing in the humanities has never been sustainable), and often strongly rely on the labour/investments of a single individual or a handful of people. In the case of many university-led initiatives, sustainability is partly underwritten by the university in the form of a subsidy. However, many institutions still require their presses to operate in a self-sustaining way… (unquote)

The full entry can be read here.

4. Science, Publishing and Government Bills: Fair Access to Science and Technology Research Act (FASTR)

Scholarly publishers are already doing much to make government-funded research as freely available as possible as soon as it is published. Why do we need a law to enact what is already taking shape? In his post on the Scholarly Kitchen blog, Robert Harington suggests it comes down to politics.

The blog post says (quote): CHORUS has blossomed as funding agencies investigate how to fulfill their goal of making funded research openly available to access. CHORUS essentially provides public access to published articles reporting on funded research. Why then does FASTR exist, if funding agencies are already fulfilling its goal of providing public access to articles reporting on federally funded research, working with publishers and other stakeholders to fulfill the OSTP mandate? Part of the answer lies in history. FASTR, in some form or other, has been around a long time, and legislators want to keep legislating. Such is the glacial nature of movement through House and Senate for bills that do not quite cut it… (unquote)

The full entry can be read here.

5. How Does Publication Bias Affect Medical Literature?

What measures are being taken to counter publication bias? In his post on the CCC blog, Troy Baker discusses a new study published in Trials that shows the impact of the 2007 FDA Amendments Act (FDAAA).

The blog post says (quote): The study compared the trials for newly approved drugs in cardiovascular disease and diabetes to those trials completed before the FDAAA took effect. The result suggested that, post-FDAAA, trials were more likely to be registered and published with findings that agreed with the FDA's interpretation. There are also a growing number of tools designed to detect publication bias in medical literature. Funnel plots are popular, and an alternative is the 'p-curve analysis', devised by Uri Simonsohn, a multidisciplinary psychologist at the University of Pennsylvania. This method does not depend on a collaborative effort by authors, editors and publishers – instead it can be carried out by individuals or by small groups. But a statistical method can only go so far and there is still a long way to go. Publication bias can occur at any stage of the research-to-publication process, so a multi-pronged approach is the only way to reduce the risk… (unquote)

The full entry can be read here.


