1. Metrics: Human-Made, but Humane?
With a grant from the Andrew W. Mellon Foundation, HumetricsHSS is a kind of meta-workshop in "rethinking humane indicators of excellence in the humanities and social sciences." In a pilot phase, HumetricsHSS is testing a set of propositions exploring how, if evaluative processes could be rebuilt around humanities values, individuals and institutions might embrace a very different approach to assessing scholars and scholarship, writes Karin Wulf in her post on the Scholarly Kitchen blog.
The blog post says (quote): The basic notion that we can and should apply standardised criteria for intellectual achievement and contribution is central to much of contemporary education. In higher education, such standardisation in the form of metrical evaluation is patently biased, as any number of studies have demonstrated. In scholarly communications, citation metrics that are often used as an indicator of quality scholarship, including in the academic tenure and promotion processes, are one of the most vexed subjects. There are technical, epistemological and philosophical issues to consider, all of them politically freighted. How to weight the importance of citation metrics, the evident bias in metrics of all kinds, and the applicability of these metrics for the humanities versus STEM fields are issues that have prompted numerous studies, reports, and reflections… (unquote)
The full entry can be read here.
2. Peer Review by Whom?
Rand Paul wants to add two people to every federal peer-review panel evaluating research proposals, charged with looking out for value to taxpayers. Science advocates say the idea would politicise federal funding of research, notes Andrew Kreighbaum in his post on Inside Higher Ed.
The blog post says (quote): NIH uses a two-tiered peer-review process. The first determines scientific merit and feasibility and requires a specific knowledge base in a particular field. The second peer-review tier, known as the advisory committee review, determines the value of a proposal to the mission of NIH. The advisory committee includes both scientific experts and members of the public. Richard Nakamura, the director of the NIH Center for Scientific Review, noted that NIH does not allow applicants to request specific reviewers. And he said in an emailed statement that the agency also doesn’t entertain requests to exclude reviewers just because an applicant requests it, although they may note an "infamous public dispute" with a potential reviewer… (unquote)
The full entry can be read here.
3. The methodology used for the Times Higher Education World University Rankings' citations metric can distort benchmarking
The Times Higher Education World University Rankings can influence an institution's reputation and even its future revenues. However, Avtar Natt, in his post on the LSE Impact of Social Sciences blog, argues that the methodology used to calculate the rankings' citations metric can have the effect of distorting benchmarking exercises.
The blog post says (quote): A norm of citation metrics is that the highest cited papers are rewarded rather than being disregarded as outliers. Of course, one can also sympathise with the argument that institutions collaborating in modern forms of big science should enjoy the spoils that come with it. Yet a big issue for institutional rankings based on citation data is how to treat papers with enormous citation counts and enormous (as well as confusing) numbers of institutional affiliations. Dividing citations based on the number of times an institution appears in the author list certainly has a democratic appeal. But it also raises a different set of problems because it is dependent on the citation database used and its particular strengths and weaknesses. While the examples I provide may be on the extreme side, there is still curiosity in how the institutional citation metrics would have looked if the fractional counting tweak was not applied… (unquote)
The full entry can be read here.
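To make the fractional counting tweak discussed in the quoted passage concrete, here is a minimal sketch of dividing a paper's citations among institutions in proportion to their appearances in the author list. It is an illustration under assumed inputs, not the Times Higher Education rankings' actual methodology; the fractional_citation_counts function and the sample paper data are hypothetical.

```python
from collections import Counter, defaultdict

def fractional_citation_counts(papers):
    """Divide each paper's citations among institutions in proportion
    to how often each institution appears in the author list
    (hypothetical illustration of fractional counting)."""
    totals = defaultdict(float)
    for paper in papers:
        # One affiliation entry per author slot on the paper.
        appearances = Counter(paper["affiliations"])
        author_slots = sum(appearances.values())
        for institution, count in appearances.items():
            totals[institution] += paper["citations"] * count / author_slots
    return dict(totals)

# A heavily cited, multi-affiliation paper: institution A fills 3 of
# the 4 author slots, institution B fills 1.
papers = [{"citations": 1000, "affiliations": ["A", "A", "A", "B"]}]
print(fractional_citation_counts(papers))  # {'A': 750.0, 'B': 250.0}
```

Under full counting, A and B would each be credited with all 1,000 citations; fractional counting splits the credit 750/250, which is why applying or withholding the tweak can visibly reshuffle a citations-based benchmarking exercise.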
4. Guest Post - Academics and Copyright Ownership: Ignorant, Confused or Misled?
The recent lawsuit against ResearchGate brought by Elsevier and the American Chemical Society follows hard upon the $15 million in damages awarded to Elsevier in their recent case against Sci-Hub. These are just the latest actions in a long line of scholarly copyright wars. Elizabeth Gadd, in her post on the Scholarly Kitchen blog, takes a look at the contradictions between scholarly culture and copyright culture, and the cognitive dissonance they create.
The blog post says (quote): So why don't publishers send stronger and clearer messages to individual academics around what constitutes copyright infringement? Well, it would seem that by doing so, they would also be sending a message to academics that their interests don't align: that when it comes down to it, publishers are primarily supportive of copyright culture and the exclusive rights it gives them, rather than scholarly culture which is something quite different. Indeed when the American Psychological Association recently issued take-down notices to various sites - including 80 university websites - which hosted 'illegal' copies of their papers, such was the outcry from academics that they quickly re-focused their attention onto 'commercial piracy sites' instead. However, it is not the sites that post these papers, it is academics themselves. By focusing on the sites as infringing rather than individual academics it further obfuscates the fact that academics' copyright practices are not in line with publishers' copyright policies… (unquote)
The full entry can be read here.