A team of scientists led by a Michigan State University astronomer has found that a new process for evaluating proposed scientific research projects is as effective as, if not more effective than, the traditional peer-review method. Normally, when a researcher submits a proposal, the funding agency asks a number of researchers in that particular field to evaluate it and make funding recommendations. That system can be unwieldy and slow, and it is not quite an exact science. The team enhanced the alternative, distributed approach with two other novel features: using machine learning to match reviewers with proposals, and including a feedback mechanism on the reviews. First, when a scientist submits a proposal for evaluation, he or she is asked to review several competitors' proposals in return, which spreads the workload and lessens the number of proposals any one person is asked to review. Second, by using computers, through machine learning, funding agencies can match reviewers with proposals in the fields in which they are experts. And third, the team introduced a feedback system in which the person who submitted the proposal can judge whether the reviews they received were helpful. Ultimately, this might help the community reward scientists who consistently provide constructive criticism.
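The article does not describe the actual matching algorithm, but the machine-learning step can be illustrated with a minimal sketch: rank reviewers for a proposal by the textual similarity between the proposal and each reviewer's area of expertise. Everything here is hypothetical, including the reviewer names, keyword profiles, and the choice of term-frequency vectors with cosine similarity as the matching score; real systems use far richer models.

```python
import math
from collections import Counter

def tf_vector(text):
    """Turn a text into a normalized term-frequency vector (word -> weight)."""
    words = text.lower().split()
    total = len(words)
    return {w: c / total for w, c in Counter(words).items()}

def cosine(a, b):
    """Cosine similarity between two sparse word-weight vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def match_reviewers(proposal_text, reviewer_profiles):
    """Rank reviewers by how closely their expertise matches the proposal."""
    p = tf_vector(proposal_text)
    scores = {name: cosine(p, tf_vector(profile))
              for name, profile in reviewer_profiles.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Invented example data: expertise keywords per reviewer.
reviewers = {
    "exoplanets_expert": "exoplanet transit photometry radial velocity detection",
    "cosmology_expert": "dark energy cosmic microwave background structure",
    "stellar_expert": "stellar evolution supernova nucleosynthesis massive stars",
}
proposal = "a survey for exoplanet transit detection around nearby stars"
ranking = match_reviewers(proposal, reviewers)
print(ranking[0])  # the best-matched reviewer for this proposal
```

The same ranking could then be used to assign each submitting scientist a handful of proposals closest to their own expertise, combining the matching step with the distributed-review step described above.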