1. What impact evidence was used in REF 2014? Disciplinary differences in how researchers demonstrate and assess impact
A new report produced by the Digital Science team explores the types of evidence used to demonstrate impact in REF2014 and pulls together guidance from leading professionals on good practice. In their post on The Impact Blog, Tamar Loach and Martin Szomszor take a broad look at the types of evidence used in the REF impact case studies and reflect on the association between the use of evidence in each category and the scores submissions received in peer review.
The blog post says (quote): This kind of analysis is broad-brush and not particularly surprising – we have counted clinical guidelines as reports and media pieces are surely closer to the research carried out in panel D than in any other panel. What it does do is show that by allowing for flexibility in the way impact is reported (case studies could cite any evidence that corroborated impact so long as it was auditable) diversity in approach and content is encouraged and realised. Here we show this at the level of main panels, but the variety and multiplicity of impact stories in the full set of case studies is backed up by a huge array of impact evidence - evidence includes technical documentation on commercial websites, social media and audience responses to public activities and a huge number of URLs pointing to unknown resources.........(unquote)
The full entry can be read Here.
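The kind of broad-brush tally the authors describe is straightforward to sketch for any tabulated set of case-study evidence. The snippet below is a purely hypothetical illustration, assuming a CSV (evidence.csv) with one row per piece of evidence and columns named panel and evidence_type; the file name and column names are assumptions for illustration, not part of the Digital Science dataset or method.

```python
# Hypothetical sketch: cross-tabulate evidence categories against REF main panels.
# Assumes a CSV "evidence.csv" with columns "panel" and "evidence_type";
# both the file and the column names are illustrative, not from the report.
import pandas as pd

evidence = pd.read_csv("evidence.csv")

# Count how often each evidence category appears under each main panel (A-D).
counts = pd.crosstab(evidence["evidence_type"], evidence["panel"])

# Convert counts to within-panel shares so panels of different sizes
# can be compared on the same footing.
shares = counts.div(counts.sum(axis=0), axis=1)

print(shares.round(2))
```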
2. ORCID: what, why, how?
Chances are that over the past couple of years you have started to hear more about ORCID. In her guest post on the BioMed Central Blog, Alice Meadows explains what ORCID is and why it is important to researchers and to those who work in research organisations.
The blog post says (quote): ORCID also enables organisations to acknowledge peer review activities. Most importantly, last October Crossref, which creates DOIs for publications, introduced Auto-Update. Now, when an author uses her/his iD at manuscript submission, for example in one of Springer Nature’s systems, that iD will be carried through the publication process, included in the metadata used for indexing papers, and posted back into their ORCID record on publication. The author only needs to give permission to Crossref once and their record will be automatically updated with all future publications where they use their iD.........(unquote)
The full entry can be read Here.
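The Auto-Update flow quoted above runs between Crossref and ORCID behind the scenes, but the resulting links are publicly queryable. As a minimal sketch (not taken from the post), the snippet below asks the Crossref REST API for works associated with an ORCID iD; the iD shown is the example identifier ORCID uses in its own documentation for the fictitious researcher Josiah Carberry.

```python
# Minimal sketch: list works that Crossref associates with a given ORCID iD,
# using the public Crossref REST API. The iD below is ORCID's example
# identifier for the fictitious researcher Josiah Carberry.
import requests

ORCID_ID = "0000-0002-1825-0097"

resp = requests.get(
    "https://api.crossref.org/works",
    params={"filter": f"orcid:{ORCID_ID}", "rows": 5},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json()["message"]["items"]:
    title = item.get("title") or ["(untitled)"]
    print(f'{item["DOI"]}: {title[0]}')
```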
3. A Multiplicity of University Publishing
There continue to be calls to consolidate all publishing activity in a single organisation or unit, and the various participants in scholarly communications are often hostile to the very idea of competition. But the evidence points the other way: a diversity of publishing venues, each operated independently, yields better and more innovative results, notes Joseph Esposito in his post on the Scholarly Kitchen Blog.
The blog post says (quote): Over time the various venues for scholarly publishing will diverge in their form. A library sits across the lane from the university press. The library creates an open access hosting model, and the university press publishes books that are sold to individuals, libraries, bookstores, and various middlemen. The library starts out with the aim of publishing the very same kinds of books, of comparable quality, as the university press. But gradually the library begins to do different things. Perhaps the material it publishes are article-length, not long-form texts; or perhaps the library concentrates on primary source material while the university press doubles down on monographs. The nature of the business model (who pays and how they pay) comes to influence the very content of the program..........(unquote)
The full entry can be read Here.
4. Will the Monograph Experience a Transition to E-Only? Latest Findings.
Although journals, other serials, and reference works have made a large-scale transition away from print, we must not assume that other components of collections will inevitably follow the same path. A combination of business models, reading practices, and other user needs will play the biggest role in determining the prospects for the printed monograph. Today, it seems that a dual-format environment may remain with us for some time, and there will be advantages for the libraries, publishers, and intermediaries that can develop monograph models that work best in such an environment, notes Roger C. Schonfeld in his post on the Scholarly Kitchen Blog.
The blog post says (quote): The key question emerging is whether we are in a dual format environment only for a transitional period or for the long term. If the reading experience for electronic monographs improves - not simple unthinking replications of the text of print monographs in digital form but a real adaptation to the complex ways they could be used - then the dual-format period may be only transitional. But if we are unable to match the print monograph reading experience in digital form, then the dual format period may extend indefinitely. For myself, I do not believe the question is about whether it is conceptually possible to create a strong digital reading experience but rather whether interests and incentives can line up to make this possible before the monograph itself declines for other reasons.........(unquote)
The full entry can be read Here.
5. Do academic social networks share academics’ interests?
In the mid-2000s, Facebook, Bebo and Myspace were neck and neck in a frenzied race to attract the most users to their fledgling social networks. A decade later, Bebo and Myspace were moribund while Facebook boasted more than 1.5 billion monthly active users and its founder, Mark Zuckerberg, had become the fourth-richest man in the world. In his post in Times Higher Education, David Matthews examines the approaches of ResearchGate, Academia.edu and Mendeley to profit, user data and open access publishing.
The blog post says (quote): Elsevier is controversial among academics because of its historic opposition to open access, which many users of social networks are likely to be inclined to support. A campaign around the time of the takeover that advocated the mass deletion of Mendeley accounts even acquired its own Twitter hashtag: #mendelete. Nor did Elsevier enhance its reputation among open access aficionados when, later in 2013, it issued a series of “takedown notices” to users of Academia.edu, asking them to remove any papers to which Elsevier held the copyright.........(unquote)
The full entry can be read Here.
6. Why Is It So Expensive to Read Academic Research?
The aphorism “information wants to be free,” coined by entrepreneur Stewart Brand in 1984 at the inaugural Hackers Conference, has come to serve as a shorthand justification for an ideology that would remove all unjust barriers to information access. And information has rarely been more accessible than it is on the controversial website Sci-Hub, which offers completely free access to pretty much any academic journal article ever published, notes Justin Peters in his post in Slate.
The blog post says (quote): The economics of academic publishing are fundamentally different from those of general book publishing, or journalism, or the chocolate industry. The latter enterprises specialise in products that members of the general public might conceivably want to purchase but don’t need. There are quite a few members of the general public, and in order to attract as many of them as possible, commercial publishers set relatively low prices for their products, hoping to realise profit through sales volume. Many online news sources, including Slate, choose to attract an audience by offering their content for free and earn money primarily from advertising rather than sales revenue..........(unquote)
The full entry can be read Here.