1. The Next Decade of Data Science: Rethinking key challenges faced by big data researchers
The vast availability of digital traces of unprecedented form and scale has led many to believe that we are entering a new data revolution. Will these new data sources and tools allow us to improve business processes in transformative ways? In his post in The Impact Blog, Vyacheslav Polonski argues that the more data is available, the more theory is needed to know what to look for and how to interpret what we have found.
The blog post says (quote): Some data scientists dream of a time when massive datasets and distinctly accurate analytics will allow them to finally truly understand and predict human behaviours; a time when the emergent "data revolution" will come to its logical conclusion in providing scientists with a constant flow of real-time data on political, economic and social interaction, making theoretical assumptions about such phenomena obsolete. Over the course of the next few years of data research, there will be many temptations to give in to the wondrous promises of big data, but data scientists need to think one step ahead, delineate illusion from reality and stay true to the traditions of the scientific method... (Unquote)
The full entry can be read Here.
2. Academic publishing is all about status
In the old days, journals were viewed as a means for disseminating ideas. Pinelopi Goldberg, an economics professor at Yale and editor in chief of the American Economic Review, however, feels that this no longer applies. So what are academic journals for? Goldberg, speaking during a panel discussion about the peer-review process at the big annual gathering of academic economists, offered this answer: "The most important function that journals have these days is the certification of quality", notes Justin Fox, in his post in The Salt Lake Tribune Blog.
The blog post says (quote): If academic publishing were all about dissemination of information, this resilience would be really hard to explain. Yes, in some fields journals are still the only way to get access to research, or articles become freely available only well after they've been published in journals. But in economics almost every paper of significance is now available in some form free on the Internet before it is published in a journal. Yet economics journals that keep their articles behind paywalls and charge hundreds or thousands of dollars a year for library subscriptions continue to thrive... (Unquote)
The full entry can be read Here.
3. The Terrible Burden of a Prestigious Brand
While all publishers like to have a strong brand, some brands are so prestigious that they actually serve to paralyse the managements responsible for them, making it impossible to introduce innovations and to develop the business. Vast bureaucracies arise whose purpose is not to develop the business but to protect the vaunted brand. This is a management problem, not a marketing one, but it can keep a publisher from pursuing a progressive agenda, notes Joseph Esposito in his post in the Scholarly Kitchen Blog.
The blog post says (quote): For the stewards of some of the brands, however, the situation is not so straightforward. If your publication has a very high impact factor (and remember, that is how the prestigious journals evaluate themselves), any other publication you come up with is likely to have a lower impact factor. A businessperson might say, Well, that's line extension. But the publisher of a highly-ranked journal is as likely to say, That's brand dilution. All the benefits of line extension - greater revenue and operating income, an opportunity to monetize the formerly rejected articles, a squeezing-out of rivals' lesser brands by the sheer clout of the prestigious brand and its new family of publications - are as nothing in comparison to the perception that the brand is not quite what it used to be. It formerly stood for the very finest; it was a reputation based on exclusion. Now it is the august patriarch of an extended family, but none of the scions have the gravitas of grandpa... (Unquote)
The full entry can be read Here.
4. Open journals that piggyback on arXiv gather momentum
Elizabeth Gibney, in her post in the Nature Blog, looks at how peer-review platforms built around online pre-print repositories are spreading to astrophysics. The astrophysicist Peter Coles recently launched a low-cost community peer-review platform that circumvents traditional scientific publishing - and by making its software open-source, he is encouraging scientists in other fields to do the same. The Open Journal of Astrophysics works in tandem with manuscripts posted on the pre-print server arXiv. Researchers submit their papers from arXiv directly to the journal, which evaluates them by conventional peer review. Accepted versions of the papers are then re-posted to arXiv and assigned a DOI, and the journal publishes links to them.
The blog post says (quote): GitHub is covering the costs of hosting the platform, so the only remaining expense is editors' and reviewers' time, which they give up voluntarily, says Coles. If the experiment proves successful and the volume of papers balloons, the journal may eventually have to charge authors a handling fee of a few tens of pounds, he adds. (The journal also relies on the continued existence of arXiv, whose running costs amount to less than $10 per paper). The journal does not have the resources to offer services provided by conventional journals, such as heavy editing of papers. Instead, poorly written articles will be rejected and the authors referred to a list of professional copy-editing services... (Unquote)
The full entry can be read Here.
5. Improving Privacy by Rethinking Architecture
Today, we grapple with privacy issues as consumers, as citizens, and as voters. As an industry, we should be thinking about how to draw not only on policy but also on technical architecture to balance privacy and innovation. When the stars align, an entirely different architecture for the control of user data is possible. Roger C. Schonfeld, in his post in the Scholarly Kitchen Blog, looks at what such a shift would mean for scholarly publishing and academic libraries.
The blog post says (quote): By having comparatively little user activity data on their own hands, and fragmenting those data they do keep, libraries may feel they have sidestepped an ethical dilemma. But they surely have missed the opportunity to build the personalized discovery and research information management tools that scientists and other academics need. In doing so, libraries have underserved researchers and other users, putting themselves at a competitive disadvantage relative to other providers. But at the same time, they have not placed limits on the data gathering and usage of content providers and other vendors nearly as strictly as they often claim for themselves... (Unquote)
The full entry can be read Here.