1. Journals give more publicity to 'weak science'
Scientists often bemoan journalists' shoddy reporting of research findings. The writer and physician Ben Goldacre has even made a career of dissecting shaky scientific claims that appear in British newspapers. But a new study suggests that scientifically illiterate hacks in desperate need of a story might be only partly to blame. David Matthews, in his post in the Times Higher Education Blog, discusses an analysis of seven prominent medical journals which finds that randomised controlled trials are far less likely to receive a press release than weaker observational studies.
The blog post says (quote): There was a similar pattern when looking at the most reliable type of research: RCTs with large numbers of participants. These were given a press release just 14 per cent of the time, compared with 38 per cent of those with smaller samples and observational trials. It even appears that journalists evened up this discrepancy: agencies and newspapers reported on such "strong" and "weak" research equally, despite the journals giving more publicity to the latter. "RCTs represent a higher level of evidence than observational studies. Consequently, it might be expected that academic commentary and media coverage would occur more frequently for randomised research than observational research," concluded "Media Coverage, Journal Press Releases and Editorials Associated with Randomized and Observational Studies in High-Impact Medical Journals: A Cohort Study"...........(unquote)
The full entry can be read Here.
2. Open Access - still to have its Google moment
Long before 'to google' became a verb, the indexing and search of web content was somewhat haphazard - you had to know the exact wording of a website's title to find what you were looking for. One of the first search engines, Archie (1990), was basically a directory - a curated FTP site hosting an index of downloadable directory listings. In his post in the Publishing Technology Blog, Byron Russell discusses the discoverability of Open Access content through Google search; a short licence-detection sketch follows the excerpt below.
The blog post says (quote): If a researcher is specifically looking for Open Access content, as will increasingly be the case, they can of course go to a directory (Archie again!) such as DOAJ, but that is far from exhaustive and is not even fully searchable - it lists over 10,700 journal records, but only 6,800 are searchable at article level. Even the broader classification of Open Access is unclear. The Creative Commons definition of Open Access in the sense of "non-commercial" is fuzzy – the website defines it as: "[not] primarily intended for or directed toward commercial advantage or private monetary compensation". The 2009 Creative Commons report on defining non-commercial alone runs to 119 pages. Is an article published under a CC-BY-NC-ND license as "open" as one published under a CC-BY-NC-SA licence? And how is any search engine to identify, differentiate and flag between these various licences?...........(unquote)
The full entry can be read Here.
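On the question of how a crawler might "identify, differentiate and flag" licences, one conventional signal is the rel="license" markup that Creative Commons' own licence chooser generates for publishers to embed. As a rough illustration (not taken from Russell's post), the minimal Python sketch below scans a page for such links and maps creativecommons.org URLs to their short codes; the class and function names are invented for the example, and it assumes the publisher actually embeds the markup in its HTML.

```python
# A minimal sketch of licence detection, assuming the page carries the
# conventional rel="license" markup that Creative Commons recommends.
from html.parser import HTMLParser


class LicenceLinkParser(HTMLParser):
    """Collects href values of <a>/<link> tags carrying rel="license"."""

    def __init__(self):
        super().__init__()
        self.licence_urls = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        rel = attrs.get("rel", "")
        if tag in ("a", "link") and "license" in rel.split():
            href = attrs.get("href")
            if href:
                self.licence_urls.append(href)


def classify_cc_licence(url):
    """Map a creativecommons.org licence URL to its short code, e.g. 'by-nc-nd'."""
    marker = "creativecommons.org/licenses/"
    if marker not in url:
        return None
    # Licence URLs look like .../licenses/by-nc-nd/4.0/
    return url.split(marker, 1)[1].split("/")[0]


if __name__ == "__main__":
    sample_page = (
        '<html><body><p>Some article text.</p>'
        '<a rel="license" href="https://creativecommons.org/licenses/by-nc-nd/4.0/">'
        'CC BY-NC-ND 4.0</a></body></html>'
    )
    parser = LicenceLinkParser()
    parser.feed(sample_page)
    for url in parser.licence_urls:
        print(url, "->", classify_cc_licence(url))
```

Pages that omit the markup, or bury the licence statement in a PDF, give a crawler nothing machine-readable to work with - which is precisely the discoverability gap the post describes.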
3. When will content truly become mobile?
After 7+ years of working remotely from my home office, I recently started a new job with a daily commute. It’s actually quite an enjoyable ride, and I originally planned to make it even better with a variety of mobile/audio content. Podcasts were at the top of my list, but I also figured I could finally dive into audio books and a variety of text-to-speech solutions, notes Joe Wikert in his post in the Digital Content Strategies Blog.
The blog post says (quote): Audio books probably aren’t the right solution for me after all though. I'm still reeling from sticker shock after surveying the audio book landscape. You'd have to be pretty committed to the book and format to pay more for the audio edition than you’d pay for the print edition. I thought the unlimited monthly subscription platforms might be an alternative but they have too many restrictions. Scribd is a great example. I'm limited to one audio book per month so it's really unlimited for ebooks but very limited for audio...........(unquote)
The full entry can be read Here.
4. The relationship between journal rejections and their impact factors
Frontiers recently published a fascinating article about the relationship between impact factor (IF) and rejection rate across a range of journals. It was a neat little study designed around the perception, held by many publishers, that in order to generate high citation counts for their journals they must be highly selective and publish only the 'highest quality' work. In his post in the ScienceOpen Blog, Jon Tennant discusses the study published by Frontiers.
The blog post says (quote): As the Frontiers article points out, this data is good evidence against the notion that to obtain a high IF, your journal must be highly selective and reject a lot of research. This is actually really important for both publishers and researchers, as it tells us that the amount of time and money which is wasted chasing higher IFs only serves to increase rejection rates, and not the impact factor of journals. Furthermore, it shows that if we assume IFs measure some aspect of journal or article quality, as many do, then this has very little to do with selectivity of journals based on a priori assessment. The IF originates from the subscription-based era of publishing and was originally intended to help librarians to select journals worth purchasing. It neither reflects the actual number of citations for a given article nor its scientific quality...........(unquote)
The full entry can be read Here.
5. There is very profitable revenue that the organizational structure of big publishers makes it hard for them to get
Mike Shatzkin, in his post in The Idea Logical Company Blog, discusses three specific cross-functional opportunities that exist in every publishing house but are especially difficult for the biggest ones to address internally. All three can unlock substantial revenue and save the house from going down costly rabbit holes trying to address pain points that are clearly felt but not so clearly understood.
The blog post says (quote): All three of these opportunities are very difficult for anybody in-house to analyze and referee, even if there is high-level recognition of the opportunity, good systems-development capability (because the existing systems will not be adequate), and the will on everybody's part to cooperate. The fact that they are cross-functional means there is no natural "home" for ownership of the solution in any house (even though the author and global opportunities would appear to have nominal owners - the editors and the account managers - in the current configuration)...........(unquote)
The full entry can be read Here.