1. Do we need an Open Science coalition?
What exactly is Open Science? Its lack of a common definition has meant Open Science can be a variety of things: a social justice issue, part of a political capitalist regime, or a form of traditional science. But this lack of consensus leaves room for Open Science to be co-opted and even exploited, notes Jon Tennant in his post on the LSE Impact of Social Sciences Blog.
The blog post says (quote): Open Access, a key part of Open Science, demonstrates this nicely in many respects, being based originally on foundations of knowledge freedom and equity, but now has become something academics are largely forced into, with it becoming a complex maze to navigate, and which many publishers exploit for additional revenue streams. In Germany and Sweden, this lack of reconciliation between researchers/institutes and some publishers has now led to national-level cancellations of journal subscriptions due to a failure to agree on the appropriate costs and services. This divergence is something that is currently happening more broadly too. Open Science is not something under control of either the public (who largely indirectly fund research), as part of a democratic operation, it is not under the control of academic institutes, and it is not under the control of academics themselves… (unquote)
The full entry can be read here.
2. Paving the desire paths of health information needs: Teaching students to edit Wikipedia
A recently published study in BMC Medical Education found that pharmacy students can improve access to quality medicines information by editing Wikipedia pages. In their post on the BMC Series Blog, Tina Brock and Dorie Apollonio describe the research they and their colleagues performed and its potential dual benefits for students and society.
The blog post says (quote): Wikipedia editing was only one of many assignments the students completed across multiple classes during that term. To compensate for these competing demands, the pharmacy students worked in small groups and received support through peer training. The results suggest that editing medicines-related Wikipedia pages as an educational activity can improve both public-facing information and student communication skills. The impact of the student edits on the Wikipedia pages was substantial, demonstrated by improvements in the accuracy and comprehensiveness of the medicines pages and increased page views, which amplified the results of the intervention. They also found that students learned more about the inner workings of Wikipedia and, as a result, many viewed this resource differently… (unquote)
The full entry can be read here.
3. Challenges in creating open data policies for universities
As government agencies and private foundations increasingly require public access to scientific publications and to the supporting research data as terms of their grants, universities struggle to meet these new requirements. Anne Mims Adrian, in her post on the Open Access Government Blog, charts the challenges in creating open data policies for universities.
The blog post says (quote): The struggle between open data and intellectual property protection lies within the confusion of who owns the data. The confusion begins with data ownership policies. Sometimes data ownership is specified within intellectual property policies, but most often data ownership is ambiguous. And for some universities, data ownership is not mentioned in their policies at all. Confounding the issue for some is the de facto mode of practice of faculty deciding how they use the data, despite policies stating that the data and work by faculty are owned by their parent universities. Because of funders' requirements and public expectations toward access to research findings and research data, universities are reacting with additions to infrastructure and support that will provide for open access and open data… (unquote)
The full entry can be read here.
4. AI peer reviewers unleashed to ease publishing grind
Peer review by artificial intelligence (AI) promises to improve the process, boost the quality of published papers, and save reviewers time. A suite of automated tools is now available to assist with peer review, but humans are still in the driver's seat, notes Douglas Heaven in his post on the Nature Blog.
The blog post says (quote): Many platforms, including ScholarOne, already have automatic plagiarism checkers. And services including Penelope.ai examine whether the references and the structure of a manuscript meet a journal's requirements. Some can flag issues with the quality of a study, too. The tool statcheck, developed by Michèle Nuijten, a methodologist at Tilburg University in the Netherlands, and colleagues, assesses the consistency of authors' statistics reporting, focusing on p values. The journal Psychological Science runs all its papers through the tool, and Nuijten says other publishers are keen to integrate it into their review processes. When Nuijten's team analysed papers published in psychology journals, they found that roughly 50 percent contained at least one statistical inconsistency. In one in eight papers, the error was serious enough that it could have changed the statistical significance of a published result… (unquote)
The full entry can be read here.
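For readers curious about what such a consistency check involves, the sketch below is a minimal Python illustration of the kind of test the post attributes to statcheck, not the tool itself (statcheck is an R package; the function name, tolerance, and sample numbers here are hypothetical, chosen for this digest). The idea is to recompute the two-tailed p-value implied by a reported t statistic and its degrees of freedom, then compare it with the p-value printed in the paper.

```python
# A minimal sketch (not statcheck itself) of a p-value consistency check:
# recompute the p-value implied by a reported test statistic and degrees
# of freedom, then compare it with the p-value the authors reported.
from scipy import stats

def check_t_test(t_value: float, df: int, reported_p: float,
                 tolerance: float = 0.0005) -> dict:
    """Recompute the two-tailed p-value for a reported t statistic and
    flag a mismatch with the reported value (rounding tolerance applied).
    The tolerance and return fields are illustrative assumptions."""
    recomputed_p = 2 * stats.t.sf(abs(t_value), df)  # two-tailed p-value
    consistent = abs(recomputed_p - reported_p) <= tolerance
    # A gross inconsistency is one that flips significance at alpha = 0.05,
    # the kind of error the post says appeared in one in eight papers.
    flips_significance = (recomputed_p < 0.05) != (reported_p < 0.05)
    return {"recomputed_p": round(recomputed_p, 4),
            "consistent": consistent,
            "flips_significance": flips_significance}

# Example: a paper reports t(28) = 2.05, p = .03. The recomputed two-tailed
# p-value is about .0499, so the reported value is inconsistent, although
# significance at the .05 level does not flip.
print(check_t_test(t_value=2.05, df=28, reported_p=0.03))
```

Statcheck itself first extracts such reported statistics (test statistic, degrees of freedom, p-value) from manuscript text before running the comparison; the sketch assumes those values have already been pulled out.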