Science and Research Content

Blogs selected for the week of May 14 to May 20, 2018:



1. Guest Post: Inclusive Pricing or Inclusive of All People? Understanding What's "Inclusive" in Digital Textbook Publishing

The scholarly communications community is very familiar with the many and varied meanings of the word "free" and how those definitions help shape or derail discussions. Stephanie Rosen, in her guest post in the Scholarly Kitchen Blog, discusses the varied meanings of the word "inclusive", and why we should take care in using it.

The blog post says (quote): "Inclusive access," although it is always digital, is not always accessible to people with print disabilities. There are many initiatives to create accessible textbooks, but the majority of electronic resources are not accessible: not encoded to standards, lacking description of visual content, and lacking metadata that declares this. Students who rely on accessible copies of course content may be pushed into buying copies they cannot use under this model. Although "inclusive access" is currently priced lower than textbook list prices, it is not always financially accessible. And, given strict publisher controls over the content, it is not free from restrictions. Rather, it comes with more restrictions than most electronic content purchased individually or licensed through library-vendor agreements………(unquote)

The full entry can be read Here.

2. Why are AI researchers boycotting a new Nature journal—and shunning others?

Computer science was born of a rebellious, hacker culture, a spirit that lives on in the publishing culture of artificial intelligence (AI). The burgeoning field is increasingly turning to conference publications and free, open-review websites while shunning traditional outlets—sentiments dramatically expressed in a growing boycott of a high-profile AI journal, notes Matthew Hutson, in his post in the Science magazine Blog.

The blog post says (quote): The petition, signed by many prominent researchers in AI, is more than just a call for open access. It decries not only closed-access, subscription-based journals such as NMI [Nature Machine Intelligence], but also author-fee publications: open-access journals that are free to read but require researchers to pay to publish. Instead, the signatories call for more "zero-cost" open-access journals. According to Thomas Dietterich, a computer scientist at Oregon State University in Corvallis, who began the boycott last month, the purpose of the boycott is "to lower the barriers to research progress" for resource-strapped scientists. The field is moving too fast for traditional publishing, and AI's potential for both great benefit and great harm requires openness………(unquote)

The full entry can be read Here.

3. Peer Review – Authors and Reviewers – our "North Star"

Publishers recognise that peer review is a paramount concern for researchers, yet they have not addressed some of the key concerns that authors and reviewers face. In his post in the Scholarly Kitchen Blog, Robert Harington suggests that publishers need to do more to support authors, and to help reviewers understand their role and be recognised for their work.

The blog post says (quote): A journal that is perceived to be of high quality will likely expect its reviewers to take a much tougher approach than a journal that is looking for good research, but is not as selective. Publishers and journal editors would do well to think about how to guide their reviewers. A reviewer may well take an entirely different approach depending on the level of review required. What would be wonderful is if a journal articulated expectations to reviewers, perhaps even providing a series of parameters and types of question a reviewer could tackle when acting as a reviewer for their journal. A reviewer is often a silent, anonymous actor in a journal’s ecosystem, so - apart from an internal sense of growth for a reviewer - it is important for publishers to find ways to overtly involve and reward their reviewers. Some publishers do this of course, providing lists of top reviewers, or badges that a researcher can apply to their emails identifying them as an active reviewer for a publisher, or journal………(unquote)

The full entry can be read Here.

4. The academic papers researchers regard as significant are not those that are highly cited

For many years, academia has relied on citation count as the main way to measure the impact or importance of research, informing metrics such as the Impact Factor and the h-index. Rachel Borchardt and Matthew R. Hartings, in their post in the LSE Impact of Social Sciences blog, report on a study that compares researchers' perceptions of significance, importance, and what is highly cited with actual citation data. The results reveal a strikingly large discrepancy between perceptions of impact and the metric we currently use to measure it.

The blog post says (quote): Citation counts also form the basis for other metrics, most notably Clarivate’s Impact Factor as well as the h-index, which respectively evaluate journal quality/prestige and researcher renown. Citations, JIF, and h-index have served as the triumvirate of impact evaluation for many years, particularly in STEM fields, where journal articles are frequently published. Many studies have pointed out various flaws with reliance on these metrics, and over time, a plethora of complementary citation-based metrics have been created to try and address various deficiencies. At the same time, we have seen altmetrics emerge as a potential alternative or complement to citations, where we can collect different data about the ways in which research is viewed, saved, and shared online. However, what is discussed less often is how well all of these metrics actually align with the subjective evaluation of impact and significance itself………(unquote)

The full entry can be read Here.
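As a brief aside for readers less familiar with the two metrics named in the excerpt above, the following is a minimal sketch in Python of how the h-index and the Journal Impact Factor are conventionally computed. The citation figures used are invented purely for illustration.

    # Minimal sketch of the two citation metrics discussed above.
    # All numbers here are invented for illustration, not real data.

    def h_index(citations):
        """Largest h such that h papers each have at least h citations."""
        ranked = sorted(citations, reverse=True)
        return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

    def impact_factor(cites_to_prior_two_years, citable_items_prior_two_years):
        """JIF for year Y: citations received in Y to items published in
        Y-1 and Y-2, divided by the citable items published in those two years."""
        return cites_to_prior_two_years / citable_items_prior_two_years

    print(h_index([10, 8, 5, 4, 3]))  # 4: four papers each have >= 4 citations
    print(impact_factor(210, 70))     # 3.0

Both measures reduce a body of work to a single citation-derived number, which is precisely the reliance the blog post calls into question.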
