1. How likely are academics to confess to errors in research?
Five years ago, the "ground opened up" beneath Richard Mann. Then a junior postdoctoral researcher at Uppsala University in Sweden, he was in the middle of a two-month visit to the University of Sydney in Australia and was due to give a seminar about a research paper that he had published recently. The paper was by far the most significant of his fledgling career, and the culmination of 18 months of hard work. Holly Else, in her post in the Times Higher Education Blog, explores the emotional, reputational and practical barriers to correcting mistakes.
The blog post says (quote): Exactly how many mistakes are in the scientific record is unknown. Published studies estimate that mistakes account for between 20 per cent and 30 per cent of all retractions in the biomedical and life sciences. However, only about 0.02 per cent of papers are ever retracted. The figure is rising, but it is unclear whether this reflects greater vigilance or a higher incidence of error and misconduct. Daniele Fanelli, a senior research scientist at Stanford University and an expert in research ethics, notes that even just 20 years ago most journals did not have retraction policies. A third of high-impact biomedical journals still lack them. Research published in 2015 about the psychological literature suggests that one in eight papers has some inconsistency in reported statistics, but these are only the mistakes that are caught: many more may slip under the radar.............(Unquote)
The full entry can be read Here.
2. Guest Post: Perry Hewitt - Bringing the Cathedral to the Bazaar: Academic Content and Wikipedia
Oxford Dictionaries named 'post-truth' 2016's international word of the year, based in part on a 2,000 percent year-over-year increase in the word's usage. Many are wary of consuming information in a digital environment rife with fake news and clickbait headlines. Scholarly publishers and librarians have a responsibility to help readers overcome these fears by delivering high-quality, reliable content and going the extra mile to make truth widely accessible. In her guest post in the Scholarly Kitchen Blog, Perry Hewitt discusses JSTOR's efforts to create and disseminate peer-reviewed scholarship to inform the post-truth world.
The blog post says (quote): For five years, JSTOR has run a program providing free access to the archives of thousands of humanities and social science journals to active Wikipedia editors, the volunteers who write and edit the Wikipedia articles we all read. We do this to inform Wikipedia articles with peer-reviewed research; as the xkcd comic above reminds us, there are a lot of citations needed. This year, 500 editors have access, and JSTOR citations are found on articles about everything from Aeschylus to Scilloideae. Since Wikipedia metrics tracking began in 2014, we have seen over 29,000 references from journals on JSTOR added to Wikipedia articles. Additionally, through the Register & Read no-cost access program, many of these articles are available for anyone to read online for free. In 2016 alone, 1.4 million people took advantage of this reading program from links across the web, including Wikipedia.............(Unquote)
The full entry can be read Here.
3. Free Internet based paraphrasing tools: further threats to academic integrity
Paraphrasing tools or article 'spinners' are free Internet sites that use computer programs to change a piece of writing so that it looks different from the original text. In her post in the BioMed Central blog, Ann Rogerson discusses a study published in the International Journal for Educational Integrity, which shows that the machine outputs are of poor quality and cannot be trusted, and that there are ways of detecting their use.
The blog post says (quote): Paraphrasing is an important skill for everyone to develop. Paraphrasing shows how well someone understands another person's ideas. It is also used to clarify meaning in conversations inside and outside of workplaces. Using online paraphrasing tools prevents people from learning and developing this important skill, which may ultimately impact on their future careers. The study took text from an existing publication and processed it through two online paraphrasing tools to examine the output. The output was further tested by seeing whether text matching software could detect that the work was actually copied. The paraphrasing output was poor in terms of word choice and grammar, while the text matching software had difficulty in matching the output with the original work. There are some clues that can assist in detecting their use.............(Unquote)
The full entry can be read Here.
4. Embedding open science practices within evaluation systems can promote research that meets societal needs in developing countries
Researchers' choices are inevitably affected by assessment systems. This often means pursuing publication in a high-impact journal and topics that appeal to the international scientific community. For researchers from developing countries, this often also means focusing on other countries or choosing one aspect of their own country that has such international appeal. Consequently, researchers' activities can become dislocated from the needs of their societies. In their post in the Impact Blog, Valeria Arza and Emanuel López explain how embedding open science practices in research evaluation systems can help address this problem and ensure research retains local relevance.
The blog post says (quote): Researchers' choices are inevitably affected by assessment systems. Current science evaluation systems have moved towards the use of quantitative indicators, with researchers assessed based on how much they publish and the perceived quality of these publications; with the journal's impact factor or similar citation metric used as a proxy for 'quality'. On the face of things, this appears justified as publications are the main outcomes of scientific research and scientists validate good quality research by citing it. By following these incentives, researchers understandably aim to publish articles in highly ranked journals. Editors' behaviour is also driven by their incentive to maximise impact factors. They must assess whether the research topic will trigger sufficient worldwide interest to become widely cited.............(Unquote)
The full entry can be read Here.
5. Altmetrics for books: a guide for editors
In 2015 the Altmetric team embarked on an exciting project: to develop and build Bookmetrix, in partnership with Springer Nature. Bookmetrix went live in April 2015 and marked a new beginning in the way that authors, publishers and editors can understand and interpret the online activity that takes place around the books they produce. In 2016 the Altmetric team extended this more broadly, launching Badges for Books in April. Cat Williams, in her post in the Altmetric Blog, looks at what altmetrics for books are and why we need them.
The blog post says (quote): Identifying the volume and type of engagement with a book or series is critical to any successful publishing strategy, and getting insight into this as early as possible can help you find opportunities to further promote popular items amongst new audiences. You might notice, for example, that a new publication is receiving attention from somewhere or someone unexpected, or even that an older work is gaining traction again. It might also be useful to take a look at the attention that other books on similar topics published by your competitors are receiving. This can help you identify new channels and strategies for increasing the visibility of your own monographs. For book authors, who to date have had little in the way of data to demonstrate the reach and potential broader influence of their work, the information that altmetrics provide can be used as great evidence for their CVs or funding applications.............(Unquote)
The full entry can be read Here.