1. Data Analysis: How Effective Is TrendMD?
A reanalysis of TrendMD experimental data reveals details about its effectiveness, novelty, and bias. A recent paper authored by the founders and employees of TrendMD, a recommender system for academic content, reported an overall citation benefit of 50%, a net gain of 5.1 citations within 12 months. How well does TrendMD work to drive eyeballs and citations to your journal? It depends on how you analyze the data, notes Phil Davis in this post on the Scholarly Kitchen blog.
The Blog post says (quote): Placed in context, TrendMD may have developed a highly effective tool for driving traffic to relevant content. What makes it different from other products with recommendation systems (PubMed, Scopus, Web of Science, ProQuest, and EBSCO, among others) is that TrendMD operates from within the content itself, obviating the need for keeping readers engaged in a large siloed platform. This platform-neutral model allows TrendMD to grow organically and, in theory, benefit from scaled network effects. TrendMD also allows customers to actively promote their content in the recommendations, similar to paid search ads showing up within Google’s organic results or sponsored product recommendations on Amazon. Publishers should take interest, only cautiously and skeptically............ (Unquote)
The full entry can be read: Here.
2. The Mess That Is Science Publishing
Researchers have been grumbling about the state of scientific publishing for years. Now, rumor has it that the Trump administration may be trying to fix at least one problem: access to reports of government-funded research. The rumored proposal would require free, immediate access to all reports of government-funded scientific research. The rumor is credible enough that an association of 210 academic and research libraries has written to the president in support of the idea. The research-publication system is a mess, and open access would be one small step toward a fix, notes John Staddon in this post on the James G. Martin Center blog.
The Blog post says (quote): The production costs for setting up a new journal are much lower than they would have been thirty years ago. Peer reviewers and editorial board members have rarely been paid, and even editors receive only modest stipends. And now authors can do digital typesetting and copyediting—once the responsibility of the publisher—with a little help from their home institutions. Cost is not an obstacle to online publishing............ (Unquote)
The full entry can be read: Here.
3. Global Science, China’s Rise, and European Anxiety
Even as many strive to create a global system for science and scientific publishing, history is still alive and geopolitics still matters. For all the talk of global science, China's skyrocketing investment in its scientific sector is causing real anxiety for Europe, notes Roger C. Schonfeld in this post on the Scholarly Kitchen blog.
The Blog post says (quote): Understandably, Europe feels very protective of its scholarly publishing sector, which, especially when UK activity is considered, has been the global leader. And in light of Brexit, there are additional reasons not only for British but also European anxieties about what the future may bring this sector. But while the US may long have been fine to import scholarly publishing services from Europe, the growth in scientific research in other regions — especially but not exclusively China — suggests different futures ahead. There is every reason to believe that, even if science continues to globalize, a strong Chinese scholarly publishing sector will develop............ (Unquote)
The full entry can be read: Here.
4. CRediT Check – Should we welcome tools to differentiate the contributions made to academic papers?
Elsevier is the latest in a lengthening list of publishers to announce its adoption of the CASRAI Contributor Roles Taxonomy (CRediT) for 1,200 journals. Authors of papers in these journals will be required to define their contributions in relation to a predefined taxonomy of 14 roles. In this post on the LSE Impact Blog, Elizabeth Gadd weighs the pros and cons of defining contributorship in a more prescriptive fashion and asks whether there is a risk of incentivising new kinds of competitive behaviour and forms of evaluation that don't benefit researchers.
The Blog post says (quote): CRediT aims to ensure that everyone attributed on a paper gets recognised for their contribution. As such it goes one step further than guidance and provides a structured way for authors to declare their various contributions. It lists 14 contributor roles, some of which you might expect (writing, analysis) and some of which you might not (supplying study resources and project admin). And whilst it won’t stop someone being named who should not be named, nor will it ensure that everyone is named who should be named, it does make omissions a bit more difficult – and for this it has been highly praised............ (Unquote)
The full entry can be read: Here.