Blogs selected for the week October 14, 2019 to October 20, 2019



1. The Second Wave of Preprint Servers: How Can Publishers Keep Afloat?

Preprint servers have grown explosively over the last ten years: over 60 platforms are now available worldwide, and sharing research outputs prior to formal peer review and publication is increasingly popular. Building on the findings of a recent study commissioned by Knowledge Exchange, ‘Accelerating scholarly communication: The transformative role of preprints’, Rob Johnson and Andrea Chiarelli of Research Consulting, in their guest post in the Scholarly Kitchen Blog, consider how publishers are responding to this growth in the uptake of preprints.

The Blog post says (quote): Preprint servers have been growing explosively over the last ten years: over 60 platforms are currently available worldwide, and the sharing of research outputs prior to formal peer-review and publication is increasing in popularity. The ‘second wave’ of preprint servers is developing quickly in areas such as biology, chemistry and psychology, and early adopters and nascent preprint servers can now be found in virtually all scholarly communities. The second wave of preprint servers has much to offer the researcher community, but those expecting it to wash away existing scientific journals are liable to be disappointed.... This might be achieved through improved internal workflows, acquisition or strategic partnerships..........(Unquote)

The full entry can be read: Here

2. How to Choose the Right Organizational Model for Data Science and Analytics

In this post in the inside BIGDATA Blog, Martijn Theuwissen, a co-founder at DataCamp, highlights how the most competitive companies prioritize developing data fluency across their workforce to improve outcomes. Organizations that aren’t able to effectively make use of their available data today are already behind the curve.

The Blog post says (quote): Companies use three main models to organize their data teams: centralized, embedded, and hybrid. The first model is a centralized data science and analytics team that fields requests from other departments, commonly set up as a Center of Excellence. This model is widely used but can be problematic because it creates a silo for data tools, skills, and responsibility. The second is the embedded, or decentralized, model, where data professionals are embedded in functional teams. The third is the hybrid model, where there is a central data team and data professionals are also embedded in functional teams. The hybrid model encourages more cross-functional collaboration and can enable a strong sense of purpose for each data professional...........(Unquote)

The full entry can be read: Here

3. If we have to endure plenary + panel conferences, how can we make them better?

The default format for most academic conferences is that of a plenary presentation followed by panel presentations. Duncan Green, in his post in the LSE Impact Blog, argues that if we cannot revolutionise conference design, we can at least strive to make standard conferences and presentations better, and suggests seven ways in which academic presentations could be improved.

The Blog post says (quote): Panels with people speed reading from their screens in a monotone, like an MP trying to get as many words as possible into Hansard. There’s a whole other post to be written about designing better conferences, which are starting to happen, but if the plenary+panel format remains the default these kinds of steps can only improve the experience for everyone involved...........(Unquote)

The full entry can be read: Here

4. Two Competing Visions for Research Data Sharing

In recent years, mechanisms for sharing and preserving research data have grown considerably. But the landscape is crowded with a number of divergent models for data sharing, and because these approaches are poorly distinguished in much of the discourse, it can be a confusing landscape. Some are driven by the needs of science, some by business strategy. Two fundamentally competing visions are emerging for sharing research data: a community-centric vision, driven by the needs of researchers in particular fields, and a publication-centric vision, developed by publishers with a strategic view of how the publication system is evolving. Research funders have a substantial interest in data sharing, notes Roger C. Schonfeld in his post in the Scholarly Kitchen Blog.

The Blog post says (quote): It is clear that the reproducibility objective of the “publications vision” has significant merit in helping to ensure the integrity of the scientific record, and it appears that the “community vision” can be invaluable in advancing scientific and scholarly progress, at least in certain fields, through data reuse. Publishers have at least some modest interest in advancing the publications vision. Using the article as the basis for data sharing reinforces their value proposition...........(Unquote)

The full entry can be read: Here

5. Tech focus: publishing platforms

Publishing platforms are digital solutions designed to help publishers and authors promote and disseminate content. In its simplest form, a platform is an accessible location to host content and make it discoverable. Stringent vetting processes can restrict what authors are able to publish, limiting the spread of information; publishing platforms allow authors to share their insights digitally and promote the sharing of information with their intended audiences. A publishing platform needs to give publishers control of all the key functions that they need to run their online business and drive growth, notes Tim Gillett in his post in the Research Information Blog.

The Blog post says (quote): Platforms are largely perceived as a commodity within the scholarly publishing sector. Yet what elevates one platform above the competition is placing the end-user customer at the forefront of the experience. Enhancing discoverability and improving access to journals for the end-user are crucial for a platform to stand out. If a user doesn’t know that a piece of content exists in the first place, there is no use for a publishing platform anyway. So, by offering a platform that encourages the discoverability, visibility and dissemination of academic content, platforms can stand out from the competition............(Unquote)

The full entry can be read: Here

6. Odyssey 2030: the future of research and publishing

The ways in which we interact and communicate are changing at an accelerating pace and, as a consequence, the way research is conducted is evolving. Research projects have become increasingly interdisciplinary and fruitful collaborations across disciplines that have not traditionally come together are becoming the new norm. The relentless flow of methodological and technical advances continues to move established fields forward and to make way for novel research disciplines. While it remains a fundamental part of the scholarly process, the way in which research is communicated and shared has to some degree been playing ‘catch up’ to these developments over the last few years, note PLOS staff Ines Alvarez-Garcia, Phil Mills and Iratxe Puebla, in this post in the PLOS Blog.

The Blog post says (quote): The University of Cambridge will be celebrating the Cambridge Festival of Ideas (October 14-27) and, after the excellent discussion they had last year (‘Rethinking failure and success in science’), they have decided to participate again. In ‘Odyssey 2030: the future of research and publishing’, they will be discussing the future of the research ecosystem and what scholarly communication will look like in 2030 and beyond. The perfect opportunity to explore initiatives that will drive a re-appraisal of how research is communicated..............(Unquote)

The full entry can be read: Here

7. Best practices for tracking altmetrics for your digital library content

Digital libraries contain loads of important scholarly resources: digitized primary sources like letters and illuminated manuscripts, arts scholarship like images and videos, and even interactive, peer-reviewed websites. Digital libraries (also known as “digital special collections”) are older than their institutional repository cousins, and also much more complex. In this post in the Altmetric Blog, Stacy Konkiel shares some technical fixes for making it easier to find altmetrics for your digital library content, along with general resources on digital scholarship assessment.

The Blog post says (quote): Altmetric uses this information to monitor our data sources for white-listed links to your digital library content, which we then clean and aggregate into reports on how your digital library content is being engaged with across the social web. In order for Altmetric to track content shared in a digital library, we need you to have standard persistent identifiers like DOIs or Handles assigned to your digital library content, and to share that information along with basic metadata in your digital library’s webpage meta tags in a supported format.........(Unquote)

The full entry can be read: Here
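As a rough illustration of the meta-tag requirement described in the quote above, the sketch below generates Highwire Press-style citation meta tags (the `citation_*` naming convention widely used by scholarly indexers) for a digital library item. The specific item metadata, DOI, and helper function are hypothetical examples, not taken from the Altmetric post; consult Altmetric’s own documentation for the exact tag sets and formats it supports.

```python
# A minimal sketch, assuming Highwire Press-style tag names (citation_*).
# The item metadata and DOI below are illustrative placeholders.
from html import escape

def citation_meta_tags(metadata: dict) -> str:
    """Render one <meta> tag per line from a metadata dict."""
    return "\n".join(
        f'<meta name="{escape(name)}" content="{escape(value)}">'
        for name, value in metadata.items()
    )

item = {
    "citation_title": "Letters of an illuminated manuscript, c. 1450",
    "citation_doi": "10.1234/example-doi",          # persistent identifier
    "citation_publication_date": "2019/10/14",
}

print(citation_meta_tags(item))
```

Emitting these tags in each item’s page `<head>`, alongside a registered DOI or Handle, is the kind of “basic metadata in a supported format” the post says aggregators need in order to match mentions of your content across the social web.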


