1. Do You Know About Yewno?
A growing number of scholarly communications tools and services are using artificial intelligence. In her post on the Scholarly Kitchen Blog, Alice Meadows discusses one such tool, Yewno, in an interview with its co-founder and Chief Business Development & Strategy Officer, Ruth Pickering.
The blog post says (quote): Yewno Unearth addresses portfolio categorisation challenges, but also can help inform acquisitions editors about their own content, helping them to spot gaps in their list and align to courses where relevant. The categorisation tool can help marketers understand the content within a single title with more clarity and detail, and also help them to select relevant content for cluster promotions targeted to specific audiences. It can be challenging for sales representatives to find an accurate list of titles pertaining to a particular course or topic that a customer is interested in, but by using Unearth they are able to quickly search all content to find the relevant content regardless of a publisher’s static taxonomy. This is particularly helpful when describing the content of a backlist, where metadata may be missing or flawed… (unquote)
The full entry can be read here.
2. New science data-sharing rules are two scoops of disappointment
After more than a year of deliberation, editors of some of the world’s leading biomedical journals have come up with a declaration on data sharing destined to usher in a glorious world of transparency and rigor in science. What they have produced is a policy with a couple of weak sticks and no carrots, note Adam Marcus and Ivan Oransky in their post on the STAT Blog.
The blog post says (quote): A major impetus for data sharing is the belief that giving outside researchers access to results is critical to improving the rigor and reproducibility of science. But the document inexplicably is silent on reproducibility. The word does not appear at all. Failure to make data sharing mandatory for acceptance of papers raises the prospect that some authors might decide to ignore the hint. In fact, some scientists might reasonably interpret the wishy-washy policy as saying: The sexier your findings, the less important sharing data is - which not only won’t promote sharing but will exacerbate an already significant problem with science publishing. The plan also will require, as a condition of consideration for future publication, that any clinical trial that starts enrolling subjects on or after January 1, 2019, includes a data-sharing plan when registering the study… (unquote)
The full entry can be read here.
3. Are Open Access Journals Immune from Piracy?
There is little doubt that piracy of subscription or member-only access content is damaging to publishers and societies. Does the same hold true for open access journals? In her post on the Scholarly Kitchen Blog, Angela Cochran explores some of the dangers piracy poses to open access content.
The blog post says (quote): No matter what the business model is, all journals want to keep users on their pages as long as possible. Some OA journals sell advertising to supplement income. Others may try to “upsell” services and would use notifications on the site to tell users about those services. All of this is lost to those that use Sci-Hub to access OA content. Big OA publishers such as PLOS and BioMed Central sell advertising on their sites and depressed usage due to piracy will hinder those efforts as well. Advertising income could certainly be used to help keep article processing charges lower or support waivers for APCs… (unquote)
The full entry can be read here.
4. “The research does not change anything; it’s the research that changes you”
A PhD can be a grueling undertaking, requiring years of hard work and dedication. In her post on the BioMed Central Blog, Sima Barmania reflects on completing her PhD, its impact on her, and the policies that she hopes to influence.
The blog post says (quote): Perhaps there are limits to academia, I wonder whether the focus on publications rather than participatory research helps or hinders. Academia is set within a context of social, historical and political elements and within this reside stakeholders, policy makers, government and last but not least people; those at risk, those who are affected and those vulnerable – those whose lives we are purportedly trying to improve in the first place. Maybe it is all too soon, some recall from their experience that it took 10–20 years before demonstrable change on the ground resulted, especially in the third sector… (unquote)
The full entry can be read here.
5. Minor, substantial or wholesale amendments: it’s time to rethink changes to published articles and avoid unnecessary stigma
The present system of labelling changes made to published articles is confusing, inconsistently applied, and out of step with digital publishing. It also carries negative connotations for authors, editors, and publishers. Is there a way to efficiently and neutrally flag a change to a published article that states what happened, separately from why it happened? In their post on The Impact Blog, Virginia Barbour, Theodora Bloom, Jennifer Lin and Elizabeth Moylan propose a new system for handling post-publication changes: one that moves away from the current confusing, stigmatising terms, differentiates the scale of changes, and distinguishes between versions of articles.
The blog post says (quote): There are negative connotations involved for all parties (authors, editors, publishers) associated with making any post-publication change, but this is particularly the case with retractions – even when done for reasons that are entirely laudable. Moreover, on many occasions it can be clear that an issue has arisen with an article which readers need to be alerted to but editors and authors have to wait for the outcome of an institutional investigation, leading to significant delays to any posted changes. Is there a way to efficiently and neutrally flag a change to a published article (i.e. what happened) which is separate from the cause (i.e. why it happened)?… (unquote)
The full entry can be read here.
6. What Is “Open Science”? (And Why Some Researchers Want It)
There is a movement within the scientific community that calls for greater collaboration between research teams. The idea is that with greater access to information, more people working separately on the same problems can solve them more efficiently and with greater transparency, note Elizabeth Gilbert and Katie Corker in their post on the Futurism Blog.
The blog post says (quote): When scientists share their underlying materials and data, other scientists can more easily evaluate and attempt to replicate them. Open science can help speed scientific discovery. When scientists share their materials and data, others can use and analyse them in new ways, potentially leading to new discoveries. Some journals are specifically dedicated to publishing data sets for reuse (Scientific Data; Journal of Open Psychology Data). A paper in the latter has already been cited 17 times in under three years - nearly all these citations represent new discoveries, sometimes on topics unrelated to the original research… (unquote)
The full entry can be read here.