Evaluating the Peer Review Process of Grant Applications
17 October 2018 | Imelda Bates
Guiding good practice to promote coherence and transparency among grant-makers and to make the award-making process more effective

Imelda Bates, Liverpool School of Tropical Medicine, talks about her research on peer review and ways of guiding good practice for panels. In this blog, she discusses the benefits and challenges that research consortia present.
Research consortia are one of the most popular ways of funding collaborative multi-national research, as they are thought to provide additional benefits such as greater generalisability of findings and a more comprehensive understanding of the research issues.
In addition, multi-disciplinary consortia can create synergies that make them more influential in bringing about programmatic and policy change, and they can help to address inequalities in resources and research opportunities amongst partners.
However, the diversity and complexity of multi-national, multi-partner consortia can present challenges and risks: they have the potential to increase inequity between partners, they can be costlier to manage than less complex models, and they can suffer from a lack of cohesion around common goals and expectations.
Guiding good practice
The programme we evaluated was one of the first in which grants were awarded based both on the quality of the science and on the quality of the research capacity strengthening plans. As this dual objective is likely to become an increasingly common requirement of funders, it was important to explore how grant reviewers managed the additional complexity.
Much of the literature on peer review has focussed on review of articles for publication; peer review by panels evaluating research applications has received much less attention. We could find no publications about the peer review process for multi-national science consortia, which meant that there was a lack of evidence to guide good practice.
Given the amount of money invested globally in trans-national research consortia, there is a pressing need to understand how funding decisions are made, to develop an evidence base that can help promote coherence and transparency within and among grant-makers, and to ultimately make the process more effective.
The Capacity Research Unit
The Capacity Research Unit, based at the Liverpool School of Tropical Medicine, specialises in the science of research capacity strengthening, generating robust evidence to guide the design of organisational capacity strengthening programmes.
We measure the effectiveness of programmes by developing rapid assessment tools to identify the strengths and needs of research programmes and institutions. In this study, our role was to undertake an evaluation of the peer review grant-making process used by an award-making institution.
In a separate but related study, we also conducted research into how the consortia themselves strengthened the research capacity of their partner institutions, and we looked at what learning could be used to improve the programme in real time.
Lessons learnt
In general, the peer-review process we observed was rigorous and well-managed, and panel members were positive, particularly regarding the opportunity for collaboration and the panel’s diversity of expertise.
Compared with their experience on other review panels, some members noted that the task for this panel was particularly challenging, because it had to cover three different natural science areas and many different countries, as well as the grant's dual aims of scientific excellence and research capacity strengthening.
Improving the process
Our findings revealed that the composition of a peer review panel should be diverse enough to reflect the scientific topics, context, gender and language of the applications, and should be maintained across different rounds of awards to ensure consistency.
The shortlisting and selection process, with the involvement of external reviewers, should be conducted in line with funders' benchmarks and should be applicable to these complex consortia applications, so as to provide a rigorous and equitable selection mechanism.
In terms of the assessment criteria, the weighting to be given to research capacity strengthening plans relative to "excellent science" needs to be defined, and the assessment should focus on the criteria most relevant to the programme's aims, to enhance consistency during the discussions.
Finally, the guidelines for reviewers should specify whether criteria such as gender and career stage must be considered in all cases, or whether they should only be used as ‘tie breakers’, with clarity on how much weight these categories carry.
Transparency
We chose to publish with F1000 because we like the ethos of transparency in reviewing and in publishing a sequence of versions. The associated metrics and blogs, low costs, as well as the speed of publication, were also attractive.
From our experience, the review process was fast, and the F1000 team was responsive and helpful; our engagement with the reviewers helped to improve both the paper and our own knowledge of the topic.