A new report by the Higher Education Policy Institute (HEPI) and Taylor & Francis examines how artificial intelligence could strengthen translational research by accelerating the movement of scientific discoveries from research environments into real-world application.
Titled ‘Using Artificial Intelligence (AI) to Advance Translational Research’ (HEPI Policy Note 67), the report evaluates how AI tools may support the UK’s translational research system through faster data analysis, improved interdisciplinary collaboration, and enhanced accessibility of research outputs. The analysis draws on discussions from a roundtable involving higher education leaders, researchers, AI innovators, and research funders, alongside a range of research case studies.
The report identifies AI as a potential enabler of more efficient translational research, particularly through the analysis of large and complex datasets, improved knowledge synthesis, and stronger connections across research disciplines. At the same time, uneven availability and quality of data across fields continue to limit the effectiveness of AI tools in supporting research translation.
Access to AI skills and expertise is highlighted as an increasingly critical factor, and the report emphasises the importance of embedding such expertise within interdisciplinary research teams. Improving access to AI capabilities is presented as a necessary condition for enabling research teams to move discoveries more effectively toward real-world outcomes.
The report also notes that AI could enhance the visibility and accessibility of research through tools such as plain-language summaries, semantic search technologies, and alternative content formats designed for audiences beyond academia. These approaches are positioned as ways to support broader engagement with research findings while maintaining scholarly rigor.
Alongside these opportunities, the report outlines several risks associated with AI use in research, including challenges related to reproducibility, bias, deskilling, academic integrity, intellectual property, and accountability. Addressing these risks is described as essential to ensuring that AI strengthens, rather than undermines, research quality and trust.
To support responsible adoption, the report sets out recommendations for research funders, institutions, and scholarly publishers. These include establishing clear expectations for responsible AI use aligned with existing integrity guidance, investing in transparent and ethical AI systems, strengthening support for interdisciplinary research models, expanding shared and open AI research infrastructure, and encouraging secure data sharing and reuse.
The report concludes that sustained investment in interdisciplinary expertise, ethical governance, and infrastructure will be necessary for AI to deliver meaningful societal benefits by accelerating the translation of research into practice.