1. Avoiding fake journals and judging the work in real ones
Phony scholarly journals that will publish almost anything in exchange for a payment, without regard for quality or the bother of subjecting the work to real peer review, have been growing in number and reach. In her post on the Science Blog, Beryl Lieff Benderly looks at how a website called Think. Check. Submit. helps researchers avoid the clutches of these bogus, so-called predatory publishers. The website seeks to educate the unwary about how to decide which journals to submit their work to.
The blog post says (quote): "The day-to-day experience of scientists is filled with devices aimed at managing overflow. Most obvious are the impact factors of journals, which are widely agreed to be unrepresentative[,] yet they are widely used as proxies for scientific importance in decisions about hiring and funding. Even more pernicious measures are becoming widespread, such as moves to assess the performance of academic staff by measuring the amount of grant money they bring in."............ (unquote)
The full entry can be read Here.
2. Making Open Access work: Clustering analysis of academic discourse suggests OA is still grappling with controversy.
Open Access Week starts October 19th. In the run-up, Stephen Pinfield, in his post on The Impact Blog, provides an overview of eighteen propositions on open access identified through an extensive analysis of the discourse. Key elements remain controversial: particularly in relation to quality, researchers continue to view open access publishing with indifference, suspicion and scepticism. It is clear that while OA has come a long way in the last five years, there is still a lot to do to make open access work.
The blog post says (quote): I looked at the peer-reviewed literature on OA and also at relevant reports, press articles and 'informal' communication channels (blogs, email discussion lists etc) which together make up the OA discourse. I used a textual analysis tool (VOSviewer) to visualise important terms from 589 key articles on OA identified from peer-reviewed journals (a corpus of over 2.5 million words). I then used the output from VOSviewer to identify major themes which could then be explored further both in the peer-reviewed literature and other sources. ... (unquote)
The full entry can be read Here.
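For readers curious about the mechanics behind such a term map: VOSviewer extracts terms from a corpus, then positions and clusters them by how often they co-occur. The following toy sketch (sample corpus, stopword list and frequency threshold all invented; VOSviewer's actual term extraction and unified mapping-and-clustering algorithms are considerably more sophisticated) shows the kind of raw co-occurrence counting such tools build on:

```python
# A toy sketch of term co-occurrence analysis: count how often frequent
# terms appear together in the same document, the raw material for a term
# map. Corpus, stopwords and threshold are illustrative only.
import re
from collections import Counter
from itertools import combinations

corpus = [
    "open access publishing and peer review quality",
    "article processing charges in open access journals",
    "peer review and research quality in subscription journals",
    "open access mandates and institutional repositories",
]

STOPWORDS = {"and", "in", "the", "of", "a"}

def terms(doc):
    # Single-word terms minus stopwords; VOSviewer uses multi-word
    # noun phrases, but single words keep this sketch short.
    return {w for w in re.findall(r"[a-z]+", doc.lower()) if w not in STOPWORDS}

# Keep only terms that occur in at least two documents.
term_counts = Counter(t for doc in corpus for t in terms(doc))
vocab = {t for t, n in term_counts.items() if n >= 2}

# Count how often each pair of frequent terms shares a document.
cooc = Counter()
for doc in corpus:
    for a, b in combinations(sorted(terms(doc) & vocab), 2):
        cooc[(a, b)] += 1

for (a, b), n in cooc.most_common(5):
    print(f"{a} -- {b}: {n}")
```

A mapping tool then lays these pairs out so that strongly co-occurring terms sit close together, which is where the clusters in Pinfield's analysis come from.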
3. HighWire's John Sack on Online Indexing of Scholarly Publications: Part 2, What Happens When Finding Everything is So Easy?
This is the second of two posts from HighWire Press's John Sack. In this post on the Scholarly Kitchen Blog, John looks at the changes that search-engine indexing has driven in the discovery of research publications; Part 2 covers Anurag Acharya's recent ALPSP keynote address.
The blog post says (quote): What's happening here? An iterative-filtering workflow is now common: search – scan titles and snippets – click on a number of abstracts – click on a few full texts – change query – lather, rinse, repeat. I think of this as a kind of hunt-then-gather mode: you hunt, you gather up, you move on to another venue, you repeat. I imagine people are determining relevance via the abstract – which loads more quickly and never hits a paywall – then decide whether to store (a PDF) or read. Scholar has also found that abstracts that have full-text links are more likely to be clicked on than those that do not have such links. Perhaps this is because the user is assured that full text is available if it is needed. ... (unquote)
The full entry can be read Here.
4. What's so Wrong with the Impact Factor? Part 2
Does the Impact Factor (IF) really harm science? If so, is the IF the cause or just a symptom of a bigger problem? A principal criticism is that the IF is an arithmetic mean of a highly skewed citation distribution, where a median would be more representative. In his post on the Perspectives blog, Phill Jones looks more closely at the psychology of the IF and how it alters authors' and readers' behavior, potentially for the worse.
The blog post says (quote): The IF was originally designed as a way to judge journals, not articles. Eugene Garfield, the scientometrician who came up with the measure, was simply trying to provide a metric to allow librarians to decide which subscription journals should be in their core collections. He never intended it to be used as a proxy measure for the quality of the articles in the journal. ... (unquote)
The full entry can be read Here.
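Some context for the mean-versus-median criticism: a journal's IF for a given year is, roughly, the total citations received that year by its items from the previous two years, divided by the number of citable items published in those years, i.e. an arithmetic mean. Because citation counts are heavily skewed, that mean can sit far above what a typical paper in the journal receives. A minimal illustration with invented numbers:

```python
# A hedged illustration (citation counts invented) of why a mean-based
# IF can misrepresent a skewed distribution: one highly cited paper
# pulls the mean far above the typical (median) paper.
from statistics import mean, median

# Hypothetical citation counts for a journal's ten citable items
# over the two-year IF window.
citations = [0, 0, 1, 1, 2, 2, 3, 4, 5, 120]

print(f"mean (IF-style):  {mean(citations):.1f}")    # 13.8
print(f"median (typical): {median(citations):.1f}")  # 2.0
```

Here an IF-style mean of 13.8 describes almost none of the journal's papers, while the median of 2.0 describes most of them, which is the heart of the objection Jones discusses.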
5. Do Academy Members Publish Better Papers?
Based on the National Academy of Sciences model of contributed submission, the journal mBio allows a different submission pathway for fellows of the American Academy of Microbiology (AAM), the honorific leadership group within the American Society for Microbiology (ASM). Fellows are permitted to select their own reviewers and submit their paper (along with its reviews) directly to the editor, notes Phil Davis in his post on the Scholarly Kitchen Blog.
The blog post says (quote): Is this VIP track for academy fellows beneficial to the performance of mBio? One could surmise that submissions from fellows of an elite academy of researchers would perform better than submissions from non-membered authors. On the other hand, one could equally argue that allowing academy fellows to choose their own reviewers ensures a less-rigorous review process and allows sub-standard work to slip through to publication, a longtime criticism of PNAS's contributed track. ... (unquote)
The full entry can be read Here.
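The question is ultimately empirical. One plausible way to test it (sketched below with invented citation counts; Davis's actual analysis may use different data and methods) is a rank-based comparison of citation counts between the two submission tracks, since skewed citation distributions make rank tests a common choice:

```python
# A hedged sketch (all data invented) of how one might test whether
# papers from the fellows' track and the standard track differ in
# citation performance, using a rank-based Mann-Whitney U test.
from scipy.stats import mannwhitneyu

fellows_track = [3, 7, 1, 15, 4, 9, 2, 30, 6, 5]    # hypothetical citations
standard_track = [2, 8, 3, 12, 5, 7, 4, 25, 6, 9]   # hypothetical citations

stat, p = mannwhitneyu(fellows_track, standard_track, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")  # a small p would suggest the tracks differ
```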