Science and Research Content

Blogs selected for the week of April 11 to April 17, 2016:



1. Fundable, but not funded: How can research funders ensure ‘unlucky’ applications are handled more appropriately?

Having a funding application rejected does not necessarily mean the research is unsupportable by funders; it may simply have been unlucky. The rejection of unlucky but otherwise sound applications carries a significant risk for wider society: good ideas may slip through the cracks, or be reworked and dulled down to sound more likely to deliver reliable results. In his post on The Impact Blog, Oli Preston looks at how funders could better handle the burden of receiving more high-quality applications than they can afford to fund.

The blog post says (quote): Unfortunately, few funders have the capacity to fund every high-quality application they receive. Therefore, the process focuses on whittling down applications to a number within the funder limitations. It is this group of applicants whose proposals are deemed supportable science, but who cannot be funded under financial limitations, who deserve attention. This group of applications could be more accurately described as ‘unlucky’. Applications that do not get selected may be rejected or told to reapply under a different guise. Many do not get feedback at all. But, if the selection process cannot predict which supportable applications would have been successful, why should these people face rejection?.........(unquote)

The full entry can be read Here.

2. On Moose and Medians (Or Why We Are Stuck With The Impact Factor)

If Thomson Reuters can calculate Impact Factors and Eigenfactors, why can’t they deliver a simple median score? In his post on the Scholarly Kitchen Blog, Phil Davis describes why we are stuck with the Impact Factor - as it is currently calculated - for the foreseeable future.

The blog post says (quote): The counting method used in the JCR is much less strenuous than the Web of Science, and relies just on the name of the journal (and its variants) and the year of citation. The JCR doesn’t attempt to match a specific source document with a specific target document, like in the Web of Science. It just adds up all of the times a journal receives citations in a given year. So, what does this have to do with medians? In the process of counting the total number of citations to a journal, the JCR loses all of the information that would allow them to calculate a median. While you can calculate an average by just knowing two numbers - total citations on the top, total citable items on the bottom - calculating a median requires you to know the performance of each paper...........(unquote)
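To make this point concrete, here is a minimal sketch in Python (with invented per-paper citation counts, not JCR data) of why an average can be computed from two journal-level totals while a median cannot:

# Hypothetical citation counts for ten papers in one journal (invented numbers).
citations_per_paper = [0, 0, 1, 1, 2, 3, 5, 8, 21, 150]

# An average needs only two aggregates - exactly what a JCR-style count keeps:
total_citations = sum(citations_per_paper)   # 191
citable_items = len(citations_per_paper)     # 10
print(total_citations / citable_items)       # 19.1, pulled up by a single outlier

# A median needs every paper's individual count, which journal-level
# totals cannot recover.
ranked = sorted(citations_per_paper)
n = len(ranked)
median = ranked[n // 2] if n % 2 else (ranked[n // 2 - 1] + ranked[n // 2]) / 2
print(median)                                # 2.5, closer to the typical paper

The same two totals (191 citations, 10 items) are consistent with many different medians, which is why discarding the per-paper distribution forecloses the calculation.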

The full entry can be read Here.

3. Data Citation Standards: Progress, But Slow Progress

Making the data behind research papers publicly available remains something of a new frontier, both for publishers and for authors. As the research culture shifts more toward transparency, and as more journals and funding bodies require the release of data, it is vital that the data be discoverable, to facilitate reuse, and citable, to provide credit where it is due. In his post on the Scholarly Kitchen Blog, David Crotty discusses a recent study of data citation practices from 2011 to 2014.

The blog post says (quote): Several of the journals I work with are just implementing data policies and means of making data available, and to be honest, data citation was not something that was on the radar of their editorial offices - writing clear policies and instructions to authors, arranging for partnerships with data repositories and then working through the technologies required to make this happen took priority. But having been involved with a recent Alan Turing Institute Symposium on Reproducibility for Data Intensive Research, the need for best citation practices was made evidently clear. We’ve now implemented a policy to meet best practices, and authors will be required to cite data as they would any other citation, and particularly to include it in their article’s References............(unquote)

The full entry can be read Here.

4. What Can Be Done to Better Manage Big Data in Healthcare?

The UK healthcare sector, and particularly the NHS, has changed and been reformed more in recent years than ever before. Significantly reduced budgets and huge pressure to cut costs and increase efficiency have prompted radical cuts and the introduction of new technologies, reshaping the industry from top to bottom. Technology is seen as an enabler of change and is being adopted in a wide range of areas, notes Nik Stanbridge in his guest post on the Digital Science Blog.

The blog post says (quote): The good news is that there are specialist, managed data storage services which are positively disrupting how Big Data is being preserved by reducing costs, meeting NHS and healthcare compliance requirements and delivering the long-term efficiency benefits that enable digital workflows to flourish. Therefore, IM&T Managers should consider bringing in one of these specialist providers of long-term data archiving, such as Arkivum, who can implement a managed service that has been specifically designed from the ground up to provide ultra-secure storage for large volumes of data for extended periods of time...........(unquote)

The full entry can be read Here.

5. Put Your Researchers on the Path to Independence

The metric of a librarian's success is in many ways tied to their users' success. If researchers can discover relevant information, evaluate it, and then access it, reporting will show not only higher usage but higher-quality usage, allowing librarians to breathe easier, notes a post in the Library Journal Blog.

The blog post says (quote): Studies show that one's emotional reaction to a simple and intuitive interface improves research results, says Sayar. That’s where intense user testing came in handy as Ebook Central’s interface was being designed. The testing helped determine when researchers use key features, like search and save, and how they use titles. "And then we spent much time user-testing what researchers needed, tracking their navigation path, to try and limit the number of clicks," says Sayar. The goal is to try to allow researchers to surface as much relevant information as is appropriate without them having to do too much work initially. Then we want to empower them so that they can take the actions to refine their searches..........(unquote)

The full entry can be read Here.

6. Publish or Perish? Academics in European Universities

Research in higher education has consistently shown that some academics publish a lot – and others publish at moderate rates, or not at all. It has always been so. But institutional reward and promotion structures have always focused on research achievements, that is, on publications. And academic prestige has always come almost exclusively from research. In his Inside Higher Ed post, Marek Kwiek focuses on highly productive academics across 11 European systems.

The blog post says (quote): The European research elite is a highly homogeneous group of academics whose research performance is driven by structurally similar factors. The variables increasing the odds of entering this class are individual rather than institutional. From whichever institutional and national contexts they come, they work according to similar working patterns and they share similar academic attitudes. Highly productive academics are similar from a European cross-national perspective - and they substantially differ intra-nationally from their lower-performing colleagues............(unquote)

The full entry can be read Here.
