Science and Research Content

'Insights 2024: Attitudes toward AI' report reveals researchers and clinicians believe in AI's potential but demand transparency

A recent study by Elsevier, a global leader in scientific information and data analytics, highlights a growing interest in the use of artificial intelligence (AI) among researchers and clinicians. The "Insights 2024: Attitudes toward AI" report, based on a survey of 3,000 researchers and clinicians across 123 countries, reveals that while there is a strong belief in AI's potential to transform research and healthcare, significant concerns about transparency and trust must be addressed for widespread adoption.

The report indicates that AI is seen as a powerful tool for accelerating knowledge discovery, increasing work quality, and reducing costs. Specifically, 94% of researchers and 96% of clinicians believe AI will help accelerate knowledge discovery, while 92% of researchers and 96% of clinicians think it will rapidly increase the volume of scholarly and medical research. Additionally, 92% foresee cost savings for institutions and businesses, 87% believe it will enhance work quality, and 85% expect AI to free up time for higher-value projects.

Despite this optimism, there are substantial concerns about the risks associated with AI. The majority of respondents fear that AI could contribute to misinformation and critical errors. Specifically, 95% of researchers and 93% of clinicians believe AI will be used for misinformation, and 86% of researchers and 85% of clinicians are concerned about AI causing critical errors. Furthermore, 81% of researchers and 82% of clinicians worry that AI might erode critical thinking, leading to over-reliance on AI for clinical decisions. Approximately 79% of clinicians and 80% of researchers believe AI could disrupt society.

Transparency and trust in AI tools are essential for their integration into daily work. The report shows that if AI tools are backed by trusted content, quality controls, and responsible AI principles, 89% of researchers and 94% of clinicians would use AI for specific tasks like generating a synthesis of articles or assessing symptoms and identifying conditions. Moreover, 81% of researchers and clinicians expect to be informed about the use of generative AI in their tools, 71% demand that AI tools' results be based on high-quality, trusted sources, and 78% of researchers and 80% of clinicians expect transparency in peer-review recommendations involving AI.

The study also reveals varying attitudes toward AI among researchers and clinicians in the US, China, and India. While over half (54%) of those familiar with AI have used it, only 31% have done so for work-related purposes. This usage rate is higher in China (39%) and lower in India (22%). Notably, only 11% of respondents consider themselves very familiar with AI or use it frequently. However, 67% of those who have not yet used AI expect to do so within two to five years, with higher expectations in China (83%) and India (79%) compared to the US (53%). Additionally, US respondents are less optimistic about the future impact of AI on their work (28%) compared to their counterparts in China (46%) and India (41%).

Attitudes are broadly aligned, with some variation, in how likely researchers and clinicians in these countries are to use AI tools for reviewing studies, identifying knowledge gaps, and generating new research hypotheses. Respondents in India reported the highest likelihood (100%), followed by China (96%) and the US (84%).

For more than a decade, Elsevier has integrated AI and machine learning with peer-reviewed content, extensive data sets, and expert oversight to develop products that enhance the effectiveness of the research, life sciences, and healthcare communities. The company adheres to Responsible AI Principles and Privacy Principles, ensuring its solutions align with the goals of these communities.

