Responses
Regulated vs. Non-Regulated AI in UK Medicine: Commentary on Warrington and Holm (2024) and the Role of LLM Risk Assessment
Published on: 13 May 2025
Comments on the findings reported by Warrington and Holm regarding the use of Artificial Intelligence (AI) by UK General Medical Council (GMC) registered doctors (Warrington DJ, Holm S. BMJ Open 2024;14:e089090. doi:10.1136/bmjopen-2024-089090).
A key observation regarding the study is its apparent lack of distinction between participants' use of regulated AI products (classified as medical devices) and non-regulated AI tools (such as general-purpose large language models, LLMs). The wide range of respondent specialties reported further highlights this issue; for instance, clinicians in radiology or pathology are more likely to encounter regulated, task-specific AI, whereas those in public health or psychiatry might be more likely to experiment with non-regulated, general-purpose models.
During the review process, I used Gemini Advanced (specifically, the model designated 2.5 Pro Experimental) to assist with processing screenshots of Table 1, Table 2 and Figure 1 into spreadsheet-processable data. The same LLM was also prompted to categorise the clinical risks associated with the AI uses listed in Figure 1 of the original paper. In the author's opinion, the LLM's risk categorisation (column 2 in Table A below) adopted a patient-centric perspective.
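For readers wishing to reproduce a comparable step programmatically rather than through the Gemini Advanced interface, a minimal sketch follows, assuming the google-generativeai Python SDK; the model name, file names and prompt wording below are illustrative placeholders, not the exact prompts used for this commentary:

# Minimal sketch (not the exact workflow used): prompting a Gemini model to
# transcribe a table screenshot into CSV and then categorise clinical risk.
# Assumes the google-generativeai SDK; model and file names are illustrative.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential
model = genai.GenerativeModel("gemini-1.5-pro")  # illustrative model name

# Step 1: extract a screenshot of Figure 1 into spreadsheet-processable CSV.
figure1 = Image.open("figure1_screenshot.png")  # hypothetical file name
extraction = model.generate_content(
    ["Transcribe the table in this image as CSV, one row per AI use.", figure1]
)
print(extraction.text)

# Step 2: prompt the model to categorise the clinical risk of each listed use.
risk_prompt = (
    "For each AI use below, assign a clinical risk category "
    "(low / medium / high) from a patient-centric perspective:\n"
    + extraction.text
)
categorisation = model.generate_content(risk_prompt)
print(categorisation.text)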
However, the author is of the opinion that a "composite" clinical risk assessment, which considers both the nature of the specific usage instance and the pot...

Conflict of Interest:
None declared.