Original research
Experiences of using artificial intelligence in healthcare: a qualitative study of UK clinician and key stakeholder perspectives
C A Fazakarley1, Maria Breen2,3, Paul Leeson4, Ben Thompson1, Victoria Williamson5,6

  1. Ultromics Ltd, Oxford, UK
  2. School of Psychology & Clinical Language Sciences, University of Reading, Reading, UK
  3. Breen Clinical Research, London, UK
  4. Division of Cardiovascular Medicine, University of Oxford, Oxford, UK
  5. King's College London, London, UK
  6. Experimental Psychology, University of Oxford, Oxford, UK

Correspondence to Dr Victoria Williamson; victoria.williamson@kcl.ac.uk

Abstract

Objectives Artificial intelligence (AI) is a rapidly developing field in healthcare, with tools being developed across various specialties to support healthcare professionals and reduce workloads. It is important to understand the experiences of professionals working in healthcare to ensure that future AI tools are acceptable and effectively implemented. The aim of this study was to gain an in-depth understanding of the experiences and perceptions of UK healthcare workers and other key stakeholders about the use of AI in the National Health Service (NHS).

Design A qualitative study using semistructured interviews conducted remotely via MS Teams. Thematic analysis was carried out.

Setting NHS and UK higher education institutes.

Participants Thirteen participants were recruited, including clinical and non-clinical participants working for the NHS and researchers working to develop AI tools for healthcare settings.

Results Four core themes were identified: positive perceptions of AI; potential barriers to using AI in healthcare; concerns regarding AI use; and steps needed to ensure the acceptability of future AI tools. Overall, we found that those working in healthcare were generally open to the use of AI and expected it to have many benefits for patients and facilitate access to care. However, concerns were raised regarding the security of patient data, the potential for misdiagnosis and the possibility that AI could increase the burden on already strained healthcare staff.

Conclusion This study found that healthcare staff are willing to engage with AI research and incorporate AI tools into care pathways. Going forward, the NHS and AI developers will need to collaborate closely to ensure that future tools are suitable for their intended use and do not negatively impact workloads or patient trust. Future AI studies should continue to incorporate the views of key stakeholders to improve tool acceptability.

Trial registration number NCT05028179; ISRCTN15113915; IRAS ref: 293515.

  • clinical decision-making
  • qualitative research
  • quality in health care

Data availability statement

Data are available from the corresponding author upon reasonable request.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.

Strengths and limitations of this study

  • Using qualitative interviews allowed for an in-depth understanding of a range of participants’ experiences, perceptions and concerns regarding the use of artificial intelligence tools in healthcare settings.

  • Interviews were conducted with a diverse range of participants working across the UK in both clinical and non-clinical settings.

  • One limitation of this research was the somewhat limited diversity of the sample, as most participants were men and White British. It will be important to expand on this in future research.

Introduction

Artificial intelligence (AI) is a rapidly evolving field within the healthcare sector. AI tools are currently being developed across specialities with the broad aim of assisting healthcare professionals in delivering safe and effective care.1 As healthcare workloads continue to increase, it is hoped that AI tools will be able to relieve some of the strain experienced by clinicians, nurses and other allied health professionals.1 2 There is growing evidence to suggest that AI tools have a variety of applications, such as assisting diagnosis3 4 and patient management,5 with many studies reporting improved outcomes for patients and improved efficiency in healthcare workflows.2

As the use of AI in healthcare continues to increase, there is a need to understand how these tools affect those who interact with them, in both patient care and research settings, to improve the development and implementation of this technology. There is a developing area of research exploring the experiences of stakeholders across the healthcare sector, with some studies reporting that healthcare professionals are open to the use of AI tools in patient care and expect that they will relieve workloads and improve efficiency and the overall patient experience.6 7 Previous studies have also documented concerns about AI implementation relating to data protection, a lack of empathic care and reduced job numbers in certain specialities such as radiology.8 9 However, much of this research is quantitative in nature, utilising methods such as surveys with predetermined responses, which cannot provide the in-depth understanding needed of participants’ feelings, beliefs and perceptions of AI tools in healthcare. It is therefore important to conduct qualitative research in this area to allow participants to express their thoughts in their own words, providing detail and nuance that would otherwise be missed by quantitative methods.10 11

One specific healthcare context in which to explore perceptions of AI tools further is the National Health Service (NHS) in the UK. This healthcare service is of particular interest in the context of AI, given that it is facing growing pressures such as increased workloads with reduced staffing levels,12 and recent studies have begun to explore the potential use of AI tools in patient care within the NHS.13 14 There has been some initial quantitative and qualitative research exploring perceptions of AI among NHS staff and patients.15 16 However, few studies have comprehensively explored perceptions of AI across multiple healthcare stakeholder groups, meaning there are limited data on the beliefs, expectations and experiences of these diverse groups.

A study currently being conducted within the NHS is PROTEUS, a prospective randomised controlled trial evaluating the use of AI in stress echocardiography.17 The aim of this research is to investigate the use of ‘EchoGo Pro’, an AI tool designed to assist in the diagnosis of coronary artery disease (CAD). PROTEUS involves 20 NHS sites across England, recruiting thousands of patients and involving professionals across the healthcare sector to implement this tool. As part of this trial, we aimed to qualitatively explore the experiences of healthcare professionals and other key stakeholders to gain a detailed understanding of how trials of AI tools in the NHS are experienced, their perceptions of implementing AI tools in NHS healthcare settings and how AI tools could impact those involved. We sought to understand the potential barriers preventing the adoption of AI tools in NHS settings, and the facilitators that make these tools attractive, well integrated and as effective as possible in these contexts.

Methods

Recruitment

This qualitative study is nested in a larger study examining the effectiveness of an AI tool, EchoGo Pro, used to improve accuracy in diagnosing cardiovascular health problems.17 The processes and procedures of this investigation are detailed in Woodward et al.17

Between December 2021 and September 2022, 13 participants were recruited to this qualitative substudy. This substudy was experiential in focus and, because previous research has shown that the experiences, beliefs and perceptions of using AI tools in healthcare are under-researched (Fazakarley et al, under review), we prioritised sample specificity when considering the ‘informational power’18 of our sampling strategy. We aimed to incorporate in-depth insights from a specific sample of healthcare staff and other key stakeholders whose roles gave them experience of working with AI tools in NHS healthcare settings to address our research aim.

Participants were initially recruited from the NHS Trust sites involved in the PROTEUS study.17 Each NHS site provided the contact details of at least one clinical and one non-clinical member of staff involved in the EchoGo Pro research, and sites were also asked to circulate study information to staff and invite any interested individuals to contact the study researcher (CAF) to participate. Due to the low response rate and the limited capacity of healthcare staff to be available for interviews during the COVID-19 pandemic, our recruitment approach was expanded to include professionals who were not directly involved in the PROTEUS study but who had experience of using AI in a healthcare context. Participants were recruited by sharing study information via research team mailing lists, contacting leading UK researchers who had published healthcare AI studies in academic journals (identified via a scoping review of the literature) (Fazakarley et al, under review), and via a snowballing method whereby all participants were asked to share the study with colleagues who might be interested in participating.

Eligible participants had to be aged 18 years or older, based in the UK, English-speaking and willing to provide informed consent. Participants were eligible if they had experience of using AI tools in a healthcare setting in a clinical role (eg, doctor, nurse) or experience in evaluating, setting up or delivering an AI tool in a healthcare context (eg, information technology (IT) expert, researcher). No limitation on eligibility was imposed according to demographic characteristics (eg, gender, age), professional grade or qualification (eg, PhD, consultant). The aim of this inclusive strategy was to ensure we collected rich data from a range of participants with diverse knowledge and experience of AI in NHS healthcare settings. For clarity, we use the term ‘professionals’ throughout to refer to both clinical and non-clinical participants. Individuals were screened for eligibility against the study inclusion/exclusion criteria by a study researcher (CAF) prior to participation. Verbal informed consent for participation in the interviews was taken from all participants and audio-recorded.

In total, 42 individuals were invited to participate; each was contacted up to three times to arrange an interview. If no response was received after the third attempt, the individual was assumed to be no longer willing to participate and the research team destroyed their contact details (n=21). Eight individuals declined to participate due to a lack of time/capacity to be interviewed. No participants withdrew during or after providing informed consent. Recruitment was stopped after regular reviews of the collected data determined that thematic saturation had been reached.19

Assessment

Interviews were carried out one-to-one via MS Teams by CAF (a female researcher with training in qualitative methods). Interviews lasted 30 min on average (range 19–39 min). Prior to data collection, the interview schedule was piloted with one clinical and one non-clinical professional to ensure that the interview questions were sensitive and appropriate. The pilot interviews were not audio-recorded, and the data were not included in the analysis. The interview schedule (online supplemental material 1) focused on professionals’ experiences of using AI in the NHS; their perceptions of barriers and facilitators to adopting AI in NHS healthcare settings; views about barriers and facilitators to conducting AI research in NHS healthcare settings; beliefs about the potential impact of using AI tools on patient care, clinician workloads and the NHS more broadly; and perceptions about the possible benefits and risks of using AI in patient care. Interview questions were open ended and encouraged participants to describe their views and lived experiences in their own words. Interviews were transcribed verbatim by CAF, with any personally identifying information removed on transcription. Audio-recordings were destroyed following transcription. Participants were also asked to provide basic demographic information.

Analysis

We used Taguette (https://www.taguette.org/) to facilitate data analysis. Data were analysed by CAF and VW using thematic analysis.11 An inductive analytic approach was used, following the steps recommended by Braun & Clarke.11 The transcripts were read and reread several times to foster familiarity with the data set, and initial codes were then generated. We subsequently searched for and generated early themes, which were revisited and revised to create core themes. Data collection and analysis took place simultaneously, which allowed developing topics of interest to be investigated in later interviews and helped ascertain whether thematic saturation had occurred. Constant comparison was used while creating codes and themes, with each new transcript compared with the existing data set to identify unexpected themes. Regular peer debriefing meetings were held, where early codes and themes were reviewed, discussed and further refined where needed.20 Both researchers (CAF and VW) kept a reflexive journal, noting the influence of their own existing beliefs and experiences to prevent premature or biased interpretation of the data.
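
To make the saturation check described above concrete, the sketch below illustrates one way a ‘no new codes in recent transcripts’ criterion can be tracked across successive interviews. This is a hypothetical illustration only: the study used Taguette and manual review by the researchers, and the example codes, the `codes_per_interview` structure and the two-interview stopping threshold are assumptions for demonstration, not part of the published method.

```python
# Hypothetical sketch of tracking thematic saturation across interviews.
# Assumes each interview's transcript has already been coded; the codes
# and the stopping threshold are illustrative, not the study's own data.

codes_per_interview = [
    {"workload relief", "diagnostic accuracy"},        # interview 1
    {"workload relief", "data privacy"},               # interview 2
    {"data privacy", "empathy", "IT infrastructure"},  # interview 3
    {"empathy", "deskilling"},                         # interview 4
    {"workload relief", "deskilling"},                 # interview 5 (no new codes)
    {"data privacy"},                                  # interview 6 (no new codes)
]

def new_codes_by_interview(coded: list[set[str]]) -> list[set[str]]:
    """Return the codes first introduced by each successive interview."""
    seen: set[str] = set()
    introduced = []
    for codes in coded:
        introduced.append(codes - seen)  # codes not seen in any earlier interview
        seen |= codes
    return introduced

def saturation_reached(coded: list[set[str]], run: int = 2) -> bool:
    """Treat saturation as `run` consecutive interviews adding no new codes."""
    introduced = new_codes_by_interview(coded)
    return len(introduced) >= run and all(not s for s in introduced[-run:])

if __name__ == "__main__":
    for i, new in enumerate(new_codes_by_interview(codes_per_interview), start=1):
        print(f"interview {i}: new codes = {sorted(new) or 'none'}")
    print("saturation reached:", saturation_reached(codes_per_interview))
```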

Patient and public involvement

Input from a dedicated patient and public involvement (PPI) stakeholder group was included in this qualitative substudy. PPI members included patients with lived experience of CAD and retired researchers in the field of healthcare. The PPI group provided feedback and guidance on the development of study materials (eg, information sheets, consent forms, interview schedules) as well as their reflections on findings from preliminary qualitative data analysis.

Results

Thirteen participants were recruited to this qualitative study. The mean age of participants was 38 years (SD=9.09; range=23–54 years). The majority of participants were men (n=8) and White British (n=9). Participants were recruited from a range of clinical and non-clinical roles (see table 1).

Table 1

Participant demographic characteristics

Qualitative findings

Four core themes were identified: positive perceptions of AI; potential barriers to using AI in healthcare; concerns regarding AI use; and steps needed to ensure the acceptability of future AI tools. Anonymised quotes are provided to illustrate findings, and we detail whether participants had direct experience of the PROTEUS tool.

Positive perceptions of AI

AI was generally seen by professionals as having a considerably positive impact on NHS healthcare. Professionals largely viewed AI tools as equipment that would support and guide clinical decision-making, particularly in situations of diagnostic uncertainty. AI tools were also expected to improve diagnostic accuracy, minimise the chance of human error and reduce clinician workload. There was also the suggestion that AI would be able to overcome human limitations, particularly in situations where patients require round-the-clock care.

…the people with the illness especially like long-term illness, they need like 24 hours monitoring which is you can’t do that as a human … so I think it would help…—participant 001, IT technician, had experience of PROTEUS tool.

Clinical participants highlighted that decreasing the time taken to assess and diagnose patients would be of particular importance, as it would ultimately impact the number of patients in a department at a given time and, thus, reduce the burden placed on NHS staff and resources.

…People that come in and need a CT scan for the head injury, they wait two hours for the CT scan, and then they wait another two hours for the CT report and, so potentially that could…half the time that they’re in the department, which makes a huge difference to us in terms of space and resources …that would be…quite a significant gain I think, in terms of pressure and stress in the department because there’s often more than a hundred people in our department which feels…pressurised…—participant 009, doctor, no experience of PROTEUS tool.

When asked about the current perceptions of AI, professionals were, overall, open to its use and suggested that healthcare staff perceptions of AI would steadily improve as they gained more experience and understanding of the tools and their potential benefits. However, some professionals did highlight that younger generations may be more likely to be open to using AI, as they have been exposed to similar tools at a younger age. In contrast, older generations were expected to be more sceptical about the technology, with some never accepting its use in healthcare.

…I think it’s definitely improving … I would make the assumption that it’s individuals on the older side of life, elderly individuals are more sceptical, and we have an ageing population obviously and I think scepticism will decrease naturally because, you know, young people are growing up around Siri and Alexa, it’s not foreign to them…—participant 008, AI developer/researcher, had experience of PROTEUS tool.

Regarding patient perspectives of AI use in their care, professionals reported that, in their experience, patients had few concerns about the AI specifically. In the context of clinical research involving AI, the issues raised by patients were typical of those raised in studies that did not feature AI tools, suggesting that patients do not view AI as a specific risk. Patients were reportedly concerned predominantly about their data privacy and whether taking part in AI research would impact their care, and were purportedly happy to participate once reassured by clinicians or medical research staff. Professionals also reported that, in their experience, patients seemed generally indifferent to or unconcerned about AI use in their care, as they trust their doctor to make the best decision for them.

…From the patients’ perspective, I think, they are fine. Our experience also was that they would be concerned but, you know, they go to an x-ray machine or MRI, and they do it … they think clinicians put them in there because they think it’s safe, so they trust them. I think as long as clinicians are reassured, they will transfer this reassurance to patients…—participant 012, AI developer/researcher, had experience of PROTEUS tool.

They just want to get better, I think they don’t really care who does it, if it’s a human or if it’s a robot or, if it’s anything AI…—participant 013, doctor, had experience of PROTEUS tool.

Barriers to using AI in healthcare

Several barriers were discussed by professional participants, the majority relating to the limited resources currently available to support the implementation of AI tools in the NHS. Professionals involved in AI research stressed that there are currently numerous data protection hurdles in place that are often lengthy and difficult to navigate when implementing a new AI tool or research study in the NHS. While professionals acknowledged that these are necessary to keep patients and their data safe, the amount of documentation and approval that must be completed was reported to cause delays in the set-up of AI tools, preventing patients and professionals from reaping potential benefits.

In particular in medical or in healthcare domain…it’s very difficult to access data due to regulations … it takes time. Although the process is fairly streamlined, it takes time to access data, all consent should be sought, all documentation should be signed and all information governance should be briefed…—participant 007, AI developer/researcher, no experience of PROTEUS tool.

Professionals also highlighted considerable inconsistency in NHS IT services across Trusts as a key barrier affecting the healthcare service’s ability to keep up to date with developing technology. Professionals, especially those in IT or AI development/research, reported that sites across the NHS often use different IT systems, many of which are unable to transfer information between one another and, as a result, are not currently meeting the demands of healthcare professionals.

…Another big problem is systems within the hospital not speaking to each other. So the electronic patient record, doesn’t talk to the radiology system, doesn’t talk to the cardiology system, doesn’t talk to the lab system so there’s sometimes discrepancies, we’re not bad in our Trust but I know other Trusts…have real problems…—participant 010, research nurse/practitioner, no experience of PROTEUS tool.

As a result, professionals anticipated that most existing NHS computer systems would not be compatible with future AI tools and that upcoming research projects would face many delays if these issues could not be resolved. Relatedly, some participants reported that many NHS IT teams are facing growing workloads, restricting the number of updates that can be made at a healthcare Trust at a given time and adding further delays to the implementation of new technologies such as AI.

…Some [NHS] Trusts were, well, they have more people, more capacity to do it. Some Trusts they just have a single person who’s not very great…this is a big problem, it’s a struggle…—participant 007, research nurse/practitioner, no experience of PROTEUS tool.

Concerns regarding AI use

Some concerns were raised in relation to the use of AI in healthcare. These concerns predominantly revolved around the possible removal of humans from patient care. Professionals described concerns that AI tools may currently lack the empathy needed to sensitively deliver health information to patients and provide the necessary reassurance and support when patients are processing diagnoses. This lack of empathy was thought to have the potential to upset patients, particularly in situations that are especially distressing (eg, new diagnosis of a serious health condition), and as a result, professional participants emphasised that humans would need to continue to have an important role in the delivery of care.

The number one is that people are always reassured when they have personal contact. And it has to be personalised … if it tries to be text-base that’s non-personalised, it doesn’t work. So that’s where the skill of communication is really helpful. And, yeah, you have to be empathetic…—participant 006, doctor, had experience of PROTEUS tool.

Concerns were also raised regarding the capacity for AI tools to make errors, and how suitable regulations would need to be developed to ensure these errors were managed or avoided. However, the majority of professionals felt that, while the accuracy of AI tools was a risk, clinicians and patients could be reassured by increasing the supporting evidence for AI tools and ensuring the tools are tested across diverse populations.

I think for anyone participating in research and seeing that and just knowing … how much experience there has been using that tool, I think that’s going to reassure people. How much it’s been tested before … it’s important for them to know just how much … or whether [it’s] already been used and their experience of it…—participant 003, research nurse/practitioner, had experience of PROTEUS tool.

A small number of professionals reported concerns that clinicians could be deskilled by the future uptake of supporting AI tools. While some professionals described the potential for junior doctors who use AI tools early in their careers to be less skilled in certain areas, this was generally not thought to be a risk to patient care. Many professionals also reported that clinical training should be updated to reflect developing technology, including AI, ensuring that clinicians remain highly skilled both with and without AI tools.

… The Royal College of Medicine and Anaesthetics …we have this sort of, supposedly, robust and comprehensive assessment you know, … the formal exams to workplace band etcetera so I suppose they’re trying to mitigate that by making sure we do understand the fundamentals, by making us do on the job assessments and more formal or classical exams as well.—participant 009, doctor, no experience of PROTEUS tool.

There was also a specific concern raised relating to resource strain. With increased AI use, professionals thought it likely that more patients would be identified as having health problems and would require clinical intervention. While many professionals believed that AI would reduce the use of some NHS resources, it was suggested that the current strain would, in fact, move to other areas of patient care. This was of particular concern, as the NHS was considered a very delicately balanced system that could easily become overwhelmed. While it is currently unclear whether this resource shift will occur, it was identified as a possible risk that should be considered and attended to as AI tools are further implemented in NHS healthcare settings.

It depends on the results doesn’t it. So if the results are fantastic and we do find that (the AI tool is) a predictive model, I think the temptation would be to send as many patients as possible through…who could get a stress echo. [I’m] slightly… concerned … it’s going to result in this huge influx of resource use…—participant 005, research nurse/practitioner, had experience of PROTEUS tool.

Ensuring acceptability of AI tools

Transparency and communication were central ideas raised by professionals in relation to making AI tools acceptable to clinical and non-clinical healthcare staff as well as patients. Professionals working in a research role emphasised that having technological support, including the opportunity to ask questions and raise concerns about the AI tool with developers, was particularly reassuring for healthcare staff and could help ensure a smooth set-up and roll-out of AI tools.

Within that making sure it’s as easy to use, and it’s as quick and fast and streamlined as possible, … having that flexibility, and whether that’s having huge amount of technical support early on because that’s where, as I’m sure you know, NHS IT is awful. Wherever you go it’s always (bad). Yeah so, I think…it’s having support … Having support for the technical aspect is really helpful.—participant 005, research nurse/practitioner, had experience of PROTEUS tool.

Fostering trust in AI tools was another key issue described by professionals, who stressed the importance of AI developers being open about how the tools were developed, the data that would be needed, and how data would be securely stored and managed. Professionals described that being transparent about these processes would create greater trust between clinicians, patients and AI developers, which could facilitate the implementation of AI tools in NHS healthcare settings. One suggestion to ensure this transparent communication was to create specific stakeholder special interest groups across the NHS to provide the opportunity to discuss AI tools and other technical implementation aspects. It was thought that such groups could provide an opportunity for individuals affected by or interested in AI tools, both patients and clinical/non-clinical healthcare staff, to raise specific concerns or questions and allow acceptable adjustments to implementation procedures to be made.

Possibly connect with you and tell you what they need maybe, what they’re looking for. What their concerns are and vice versa you could connect with them … where you can say this is what we’re available to sort of give you, how would this work for your needs, because people need to talk…—participant 010, research nurse/practitioner, no experience of PROTEUS tool.

Discussion

The aim of this qualitative study was to examine clinical and non-clinical professional stakeholders’ experiences and perceptions of using an AI tool in an NHS setting; their beliefs about the possible implications of using AI tools; and their views about potential facilitators of, and barriers to, using AI tools in patient care within an NHS context. We identified four core themes: positive perceptions of AI; potential barriers to using AI in healthcare; concerns regarding AI use; and the potential steps needed to ensure the acceptability of future AI tools.

Overall, professionals reported being open to the idea of using AI in patient care, and those with previous experience of utilising AI tools described coming away with a positive outlook on them. Across the professional participants, AI was considered a supportive technology that could assist healthcare professionals and benefit clinicians and patients alike. Central to this view, however, was that an AI tool must fit existing workflows and not cause added or unnecessary strain on NHS services. There was also a central belief that clinicians would need to remain involved in patient care to provide appropriate empathetic support. In this study, therefore, AI was anticipated to be primarily an assistive tool, and there was little expectation that it would ever fully replace human clinicians. This is consistent with previous research involving NHS professionals, such as Morrison,21 who found that AI tools were perceived as able to assist clinical staff and as having the potential to improve their working lives. Similarly, Morrison21 reported that it was considered unlikely that human doctors would be replaced by AI technology, although this would likely remain a concern as more AI tools are used.

Participants in this study did raise concerns about AI tools relating to data protection and ensuring patient privacy. This was of particular importance to professional participants involved in research, who reported that patients required reassurance that any data collected would be used appropriately and kept safe. However, professionals believed that patient uncertainty would likely reduce as the amount of supporting research for AI tools increased over time. Concerns about data security are not unique to AI tools, however, and have been found consistently in studies investigating the acceptability of various technological advances in healthcare over the years, including the use of emails, mobile healthcare applications and remote healthcare consultations.22–24

This study also identified other potential barriers to the uptake of AI tools relating to the potential consequences of AI use in healthcare, including the potential for errors and increased demand for limited healthcare resources. As using AI tools in healthcare comes with the possibility of machine error (eg, diagnostic error), this could negatively impact not only the patient but also the trust patients have in the healthcare professionals involved in their care.25 26 As previous studies describe a relatively low public tolerance for machine errors,25 ensuring that future AI tools have a human override feature, and that this is clearly communicated, will likely be important for fostering trust. Additionally, while it was not identified as a significant risk, participants in the present study acknowledged the potential for clinicians who use AI tools early in their careers to become deskilled if medical training does not keep pace with the technology. This reflects existing literature27 that has also identified ‘deskilling’ as something to be mitigated where possible to ensure acceptability among clinical staff as well as patient safety.

When examining these results in the specific context of the NHS, some of the concerns raised may be unique to this healthcare system. For example, if the use of AI tools does increase the demand for healthcare services, this would likely cause further strain on a system already experiencing multiple difficulties relating to resources and workforce.12 As the NHS is a publicly funded system, its funding and resources are relatively constrained; any changes in demand for resources could therefore impact how care is given to patients, affecting their overall experience. In contrast, more privately funded systems, such as those common in the USA, are arguably more adaptable to changes in demand, as funding for services is provided as they are utilised by patients. Funding for these services is therefore likely to increase along with patient demand, allowing them to adapt accordingly.

Recommendations for future AI studies

Several structural barriers to implementing AI tools in healthcare were identified in this study, including difficulties navigating data protection policies and the limited IT infrastructure in many NHS services. Going forward, for future AI development and adoption to be successful, there is a need for a robust IT governance strategy across NHS services to ensure AI tools can be smoothly and successfully integrated.25 28–32 Given the disparity in resources and IT capacities across NHS Trusts throughout the UK,33 national efforts and investment may be needed to increase capacity for AI deployment, facilitate the widespread adoption of AI tools and help promote better healthcare equality.31 34

When discussing approaches to improve the acceptability of AI tools, participating professionals highlighted the importance of transparency between AI developers and their intended users. Offering training to improve clinical care teams’ understanding of AI technology, its potential risks/benefits and secure data management may further increase clinicians’ receptiveness to future AI tools and their ability to discuss AI tools with patients.35–37 In a similar vein, NHS sites receiving timely, practical technological support from developers when setting up a new AI tool, including clear instructions and a point of contact to raise concerns with, may also be important when operationalising AI technologies in the NHS healthcare system. Developers should be mindful that a considerable proportion of NHS Trusts have small IT teams or less up-to-date facilities; these sites may need more time to implement new technologies and will potentially require more support than other sites.

In relation to patients’ perceptions of AI tool acceptability, the results of this study highlight the perceived concerns that patients could have about AI tools being used in their care and the importance of AI tools being human-centred. Participating professionals suggested that special interest groups may be especially valuable in future AI research. These groups could provide opportunities for patients and other stakeholders (eg, caregivers, frontline clinical teams, ethicists) to raise questions or concerns and help developers ensure adequate information is provided to healthcare teams, patients and their caregivers. Participants in this study also highlighted the need for future longitudinal studies of AI tools to provide additional evidence and reassurance to patients about a tool’s safety and efficacy across populations and over time.

Strengths and limitations

This study has several strengths and limitations. A strength is that we interviewed a range of professionals (clinical and non-clinical) with a variety of lived experiences, expertise and beliefs about AI in healthcare. This sample allowed for a detailed analysis of the data and provided an in-depth understanding of how AI fits in the context of NHS healthcare, the barriers/facilitators to uptake and recommendations on approaches that may improve AI acceptability and implementation. Nonetheless, a limitation of this study is the opportunity sampling strategy used and the limited demographic diversity of the sample (eg, most participants were men and White British). It would be valuable for future studies to include the views of other key stakeholders, such as information governance staff. A number of difficulties were experienced in recruiting NHS healthcare staff, primarily because researchers were often unable to make contact with staff; this may reflect the fact that recruitment took place during the COVID-19 pandemic, which heavily impacted NHS healthcare staff workloads. We broadened our recruitment approach in response, which led to a more diverse professional stakeholder sample, including not only clinicians but also AI developers/researchers with experience of using AI tools in NHS healthcare settings. Finally, it was beyond the scope of this study to include patient participants. The views of patients are a key component in ensuring AI tools used in healthcare settings are feasible and acceptable. While participants speculated about potential patient views, these opinions were offered from a very different position than that of patients actually going through treatment; future research is therefore needed to explore patient experiences and beliefs about AI tools in their care.

Conclusion

Despite these limitations, this study adds to the limited literature on the experiences and perceptions of clinical and non-clinical professional stakeholders of using AI tools in patient care in NHS settings. We identified several practical barriers to implementation, including a disparity in NHS IT capacities across Trusts and difficult-to-navigate organisational permissions, as well as concerns that could act as barriers to engagement (eg, concerns about misdiagnosis and the potential for AI tools to deskill clinicians). Nonetheless, a number of positive implications of using AI in healthcare were also found, including the potential for AI tools to improve diagnostic accuracy and reduce clinician workload. As efforts to expand the role of AI tools in healthcare settings increase, the recommendations made by this study, about the importance of ensuring transparency and trust regarding data storage/sharing and having readily available technological support from developers, may help ensure future AI tools are effectively implemented and benefit both patients and clinical care teams.

Ethics statements

Patient consent for publication

Ethics approval

Ethical approval was received for this study from the University of Oxford Medical Sciences Interdivisional Research Ethics Committee (R77627/RE001); HRA and Health and Care Research Wales (HCRW) (21/NW/0199) and the NHS North West - Preston Research Ethics Committee (21/NW/0199). Participants gave informed consent to participate in the study before taking part.

Supplementary materials

  • Supplementary Data

    This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

Footnotes

  • Twitter @VWilliamson_psy

  • Contributors All authors: contributed to the conception, planning and design of the study; contributed towards participant recruitment; reviewed and approved the manuscript. CAF and VW: collected, interpreted and analysed the data; drafted the manuscript for publication. VW is guarantor.

  • Funding Funding for this research was provided by National Institute for Health and Care Research (AI in Healthcare Award, Grant Number AI_AWARD01833).

  • Competing interests PL is a founder and shareholder of Ultromics Ltd and is an inventor on patents in the field of AI and healthcare. Ultromics Ltd uses Artificial Intelligence to build solutions that help meet the unmet needs of cardiovascular medicine, including EchoGo Pro, the Medical Device used in PROTEUS, the clinical trial associated with this qualitative research project.

  • Patient and public involvement Patients and/or the public were involved in the design, or conduct, or reporting, or dissemination plans of this research. Refer to the Methods section for further details.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.