1. A Taxonomy of University Presses Today
University presses bring a diversity not only of costs, scale, and business models, but also of organisational capacity, incentives, and objectives. As efforts mount to transition monograph publishing to open access, it is vital to recognise the richness and complexity of this community, notes Roger C. Schonfeld in his post in the Scholarly Kitchen Blog.
The blog post says (quote): A number of presses are being merged into their academic libraries or now report to the library. In some cases, such as Michigan and Temple, this has resulted in growing incorporation into, and contribution to, the mission and work of the academic library. These integrated presses can pursue major initiatives to disrupt scholarly publishing, but they vary substantially in scale and potential for impact. In some cases, hidden presses have been merged into the library less for strategic reasons than to provide a graceful budgetary mechanism to cover, and to some degree mask, their cost to the university. There are other cases where presses report to the university library director but continue to operate more or less independently as businesses.… (unquote)
The full entry can be read Here.
2. Guest Post, Adam Hodgkin: Do Books Need More Aggregation or More Curation - Time to Uncircle the Wagons?
Academic and scholarly publishers have gravitated towards an aggregation model of distribution, and journal publishers are in the midst of a period of consolidation, as mergers and acquisitions leave the large publishers combining and growing ever larger. In his guest post in the Scholarly Kitchen Blog, Adam Hodgkin looks at the differences between the academic books and journals markets, and at why the aggregation strategies used for journals may not work in the same manner for books.
The blog post says (quote): The publication of academic, scholarly, social scientific books is not so easily amenable to consolidation and aggregation model. Even if we ignore the rather special requirements and market for academic and college textbooks, the books market as it reaches into university and college libraries is a very heterogenous affair. Scholarly monographs are only a small part of the picture. There are many more publishers involved, many more disciplines and sub-disciplines, and a large but very interesting fringe of books which are by no means exclusively for scholars and students. University libraries need digital versions of many of the ostensibly ‘trade’ books, or special interest books, that may have been primarily written for a popular cultural market. The high granularity of the books market has not stopped large academic book publishers, or consortia of them, from developing their own ‘book platforms’ and to an extent piggy-backing them with an existing periodicals service: Oxford, Cambridge, Wiley (again), Project Muse and JSTOR have launched aggregation solutions for the books market.… (unquote)
The full entry can be read Here.
3. Publishing and sharing data papers can increase impact and benefits researchers, publishers, funders and libraries
The process of compiling and submitting data papers to journals has long been a frustrating one for the minority of researchers who have tried it. In her post in The Impact Blog, Fiona Murphy, part of a project team working to automate this process, outlines why publishing data papers is important and how open data can benefit all stakeholders across scholarly communications and higher education.
The blog post says (quote): What’s the wider context for publishing data papers? Those who have been keeping an eye on this topic will be well aware that the debate as to whether the ‘data paper’ and ‘data journal’ are more than a transitional or transient scholarly communication format and medium is still ongoing. And currently very few researchers are publishing their data - it simply hasn’t been integrated into their training, workflows or incentive schemes. Funders, publishers and other organisations such as DataCite have been working hard to raise awareness of the benefits in general terms to ‘science’, but it’s been difficult to make the case to the individual for taking the time to pull together a data paper.… (unquote)
The full entry can be read Here.
4. Digging into the (feedback) data
At Altmetric, they speak to a lot of publishers, and while each has its own particular use case, two questions come up often: What do authors think of the data Altmetric provides? How can we help get more attention for our content? In her post in the Altmetric Blog, Cat Williams takes a look at a few recent surveys, studies and examples that go some way towards addressing these questions.
The blog post says (quote): A good place to start is by looking at the Altmetric data for other articles published in peer or competitor journals: where are they getting attention? Are they engaging the audiences you want to reach? You can do this by clicking on the donut badges on many publisher article pages. Take a look at the details pages of individual articles, datasets, books or chapters to determine if there are key influencers (perhaps bloggers, or key opinion leaders on Twitter) who you might be able to reach out to, with the aim of a) making them aware of your content and b) getting them to help raise its visibility. Social media doesn’t need to be time-consuming: get yourself set up with a free tool like hootsuite, and schedule some tweets to go out through the week.… (unquote)
The full entry can be read Here.
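For readers who want to look up attention data programmatically rather than clicking through donut badges, Altmetric also offers a free, rate-limited public API. The sketch below is a minimal, hedged example: the endpoint URL and the response field names (`score`, `cited_by_tweeters_count`, and so on) reflect Altmetric's public v1 details API as commonly documented, but are assumptions here rather than anything stated in the post itself.

```python
import json
import urllib.error
import urllib.request

# Assumed free v1 endpoint; Altmetric rate-limits unauthenticated calls.
ALTMETRIC_API = "https://api.altmetric.com/v1/doi/{doi}"

def summarise_attention(record):
    """Pull a few headline attention figures out of an API response dict.

    Field names are assumptions based on Altmetric's public documentation;
    missing sources default to zero.
    """
    return {
        "score": record.get("score", 0),
        "tweeters": record.get("cited_by_tweeters_count", 0),
        "blogs": record.get("cited_by_feeds_count", 0),
        "news": record.get("cited_by_msm_count", 0),
    }

def fetch_attention(doi):
    """Fetch and summarise attention data for one DOI.

    Returns None when Altmetric has no record (the API answers 404).
    """
    try:
        with urllib.request.urlopen(ALTMETRIC_API.format(doi=doi)) as resp:
            return summarise_attention(json.load(resp))
    except urllib.error.HTTPError:
        return None
```

Running `fetch_attention` over the DOIs of a competitor journal's recent articles would give a quick, comparable view of where their attention is coming from, which is essentially the manual exercise the post recommends.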
5. What can dynamic data visualisation do for us?
The fourth paradigm of science brings with it an onslaught of data: quantitative, qualitative, direct and anecdotal. The ability to collect and share vast quantities of data is often described as the greatest change in scientific research of our times, but with this new opportunity come inherent challenges in comprehending that data, notes Alex Oxborough, in his post in the Semantico Blog.
The blog post says (quote): By simply concluding, “Data Visualisations are brilliant! We should have more!”, we would be missing the real untapped potential of dynamic data visualisation. As it stands, visualisations are taken from cleaned – and therefore closed off – datasets. Imagine then if visualisations could be made from the vast raw datasets languishing in data dumps. If, instead of neatly fencing off data as an analysis reporting tool, it was scraped from raw datasets as an integral part of the research process. Indeed, in an ideal world, if these datasets could be stitched. This would free the data from the bounds of perspective and ideology.… (unquote)
The full entry can be read Here.
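The post's core distinction — visualising from a cleaned, closed dataset versus working directly over raw data — can be illustrated with a small sketch. The helper below is hypothetical (not from the post): it aggregates raw records into plot-ready pairs while tolerating dirty rows in place, instead of requiring a separate cleaning pass that closes the dataset off first.

```python
from collections import defaultdict

def aggregate_raw(records, key_field, value_field):
    """Collapse raw, possibly messy records into sorted (category, total) pairs.

    Rows with a missing or non-numeric value are skipped on the fly,
    so the raw dataset never has to be cleaned and frozen beforehand.
    """
    totals = defaultdict(float)
    for row in records:
        try:
            totals[row[key_field]] += float(row[value_field])
        except (KeyError, TypeError, ValueError):
            continue  # tolerate dirty rows rather than rejecting the dump
    return sorted(totals.items())

# A raw dump with one unusable row mixed in.
raw = [
    {"site": "A", "count": "3"},
    {"site": "B", "count": "4"},
    {"site": "A", "count": "oops"},  # dirty row, silently skipped
    {"site": "A", "count": "2"},
]
print(aggregate_raw(raw, "site", "count"))  # → [('A', 5.0), ('B', 4.0)]
```

The output pairs could be fed straight into any charting library; the point is only that the aggregation step consumes the raw dump directly, in the spirit of the post's argument.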