
Original research
Barriers to and facilitators of clinician acceptance and use of artificial intelligence in healthcare settings: a scoping review
  1. Catherine E A Scipion1,
  2. Margaret A Manchester1,
  3. Alex Federman2,
  4. Yufei Wang1,
  5. Jalayne J Arias1
  1. 1Department of Health Policy and Behavioral Sciences, Georgia State University School of Public Health, Atlanta, Georgia, USA
  2. 2Division of General Internal Medicine, Icahn School of Medicine at Mt. Sinai, New York City, New York, USA
  1. Correspondence to Dr Catherine E A Scipion; cscipion1@gsu.edu

Abstract

Objectives This study aimed to systematically map the evidence and identify patterns in the barriers to and facilitators of clinician artificial intelligence (AI) acceptance and use across types of AI healthcare application and the income levels of the geographic regions where clinicians practise.

Design This scoping review was conducted in accordance with the Joanna Briggs Institute methodology for scoping reviews and reported using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews guideline.

Data sources PubMed and Embase were searched from 2010 to 21 August 2023.

Eligibility criteria This scoping review included both empirical and conceptual studies published in peer-reviewed journals that focused on barriers to and facilitators of clinician acceptance and use of AI in healthcare facilities. Studies that involved either hypothetical or real-life applications of AI in healthcare settings were included. Studies not written in English, as well as those focused on digital devices or robots not supported by an AI system, were excluded.

Data extraction and synthesis Three independent investigators conducted data extraction using a pre-tested tool designed based on the eligibility criteria and the constructs of the Unified Theory of Acceptance and Use of Technology (UTAUT) framework to systematically summarise data. Subsequently, two independent investigators applied the framework analysis method to identify additional barriers to and facilitators of clinician acceptance and use of AI in healthcare settings, extending beyond those captured by UTAUT.

Results The search identified 328 unique articles, of which 46 met the eligibility criteria, including 44 empirical studies and 2 conceptual studies. Among these, 32 studies (69.6%) were conducted in high-income countries and 9 studies (19.6%) in low-income and middle-income countries (LMICs). In terms of the types of healthcare settings, 21 studies examined primary care, 26 focused on secondary care and 21 reported on tertiary care. Overall, drivers of clinician AI acceptance and use were ambivalent, functioning as either barriers or facilitators depending on context. Performance expectancy and facilitating conditions emerged as the most frequent and consistent drivers across healthcare contexts. Notably, there were significant gaps in evidence examining the moderator effect of clinician demographics on the relationship between drivers and AI acceptance and use. Key themes not encompassed by the UTAUT framework included clinician involvement as a facilitator and clinician hesitancy and legal and ethical considerations as barriers. Other factors, such as conclusiveness, relationship dynamics and technical features, were identified as ambivalent drivers. While clinicians’ perceptions and experiences of these drivers varied across primary, secondary and tertiary care, there was a notable lack of evidence exclusively examining drivers of clinician AI acceptance in LMIC clinical practice.

Conclusions This scoping review highlights key gaps in understanding clinician acceptance and use of AI in healthcare, including the limited examination of individual moderators and context-specific factors in LMICs. While universal determinants such as performance expectancy and facilitating conditions were consistently identified across settings, factors not covered by the UTAUT framework, such as clinician hesitancy, relationship dynamics, legal and ethical considerations, technical features and clinician involvement, emerged with varying impact depending on the level of healthcare context. These findings underscore the need to refine frameworks like UTAUT to incorporate context-specific drivers of AI acceptance and use. Future research should address these gaps by investigating both universal and context-specific barriers and expanding existing frameworks to better reflect the complexities of AI adoption in diverse healthcare settings.

  • Artificial Intelligence
  • Physicians
  • Behavior

Data availability statement

Data sharing not applicable as no datasets generated and/or analysed for this study.


This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


STRENGTHS AND LIMITATIONS OF THIS STUDY

  • This scoping review applied predefined eligibility criteria and systematic searches (combining database searches, backward and forward citation tracking, and screening of reference lists and similar articles), ensuring rigour and replicability of the findings.

  • This scoping review examined patterns in clinician artificial intelligence (AI) acceptance across diverse economic and healthcare contexts by categorising findings by AI application types and income levels of study settings.

  • The generalisability of the trend analysis may be limited by the search strategy, which was restricted to two databases, open-access materials and English-language articles, potentially excluding relevant studies from grey literature and from low-income and middle-income countries.

  • Some sources were excluded from the trend analysis due to insufficient geographic information, potentially limiting the study’s ability to capture regional variability in barriers and facilitators.

Introduction

Artificial intelligence (AI)-driven tools and methods (ie, machine learning, deep learning, natural language processing) are positioned to advance healthcare by improving diagnostic, screening and therapeutic capabilities, as well as allowing disease prediction and monitoring.1 However, integrating these technologies into clinical care remains challenging, largely due to resistance to change and the multifaceted dynamics that characterise healthcare settings.2 The distinct characteristics and clinical goals of primary, secondary, tertiary and quaternary care settings are likely to influence the adoption and integration of AI tools in practice. Contextual factors specific to these settings play a significant role in determining how AI is used, underscoring the need to consider healthcare settings when exploring barriers and facilitators to AI adoption.

Clinicians, as frontline professionals across diverse healthcare settings, play a pivotal role in decision-making and act as key gatekeepers in the integration and use of AI tools. Their perspectives on barriers and facilitators of AI acceptance and use are likely to influence whether and how AI tools are adopted and integrated into clinical care across settings. Additionally, the geographic context in which clinicians practise, including policy frameworks and cultural nuances, further impacts AI adoption.3 4 For example, a global survey revealed that clinicians in resource-abundant settings exhibited lower acceptance of AI diagnostic tools compared with those in resource-constrained environments.3 These regional and contextual influences highlight the complexity of clinician AI acceptance and the need for comprehensive models to study it.

Prior research and scholarship have proposed theoretical and conceptual models to characterise the barriers to and facilitators of clinician acceptance and use of novel technologies in healthcare. Among these, the Unified Theory of Acceptance and Use of Technology (UTAUT) stands out for its comprehensive approach. UTAUT integrates components from eight prominent models, including the Theory of Reasoned Action, Technology Acceptance Model (TAM), Motivational Model, Theory of Planned Behavior (TPB), Combined TAM and TPB, Model of Personal Computer Utilization, Innovation Diffusion Theory and Social Cognitive Theory. This integration allows UTAUT to offer a holistic perspective by addressing both intrinsic and extrinsic factors that drive technology acceptance and use. Compared with individual models, UTAUT offers superior explanatory power and a broader scope, making it particularly effective in identifying key drivers of acceptance and behavioural intentions. While the individual models often focus on specific constructs or narrow contexts, UTAUT synthesises their strengths to provide a unified framework. The model incorporates four core constructs—performance expectancy, effort expectancy, facilitating conditions and social influence. Unlike other models, UTAUT uniquely accounts for the moderating effects of user demographics such as gender, age, voluntariness and experience.5 These features make UTAUT especially relevant for understanding clinician AI acceptance and use as it captures the interplay of personal and contextual factors.

A preliminary search was conducted to identify existing reviews, including scoping and systematic reviews, on barriers to and facilitators of clinician AI acceptance and use in healthcare. While the literature on AI applications in healthcare has grown substantially, with a 5.12% annual increase in publications over the past 28 years,6 significant gaps remain. Most reviews failed to account for the geographic distribution of clinician practices7–10 or the types of healthcare settings.7 9 11 Some reviews focused narrowly on either a single healthcare setting type10 12 13 or a specific AI method.8 9 12 While a few reviews reported on various regions of clinician practice11 12 or healthcare settings,8 they did not examine variability in barriers and facilitators based on regional or contextual differences. A more recent scoping review provided a broader approach by considering diverse AI methods, healthcare settings and geographic distributions of clinician practice.14 However, it did not specifically address the variability in barriers and facilitators influencing clinician acceptance and use of AI across regions or contextual differences in practice settings. This highlights a critical knowledge gap regarding how clinician perspectives vary across different healthcare settings and geographic regions, particularly in the context of income-level disparities. This underscores the need for a comprehensive review of the literature in the area.

The investigators used a scoping review to systematically map the evidence on barriers to and facilitators of clinician acceptance and use of AI in healthcare. This review examined these factors and identified their patterns across various healthcare settings (ie, primary, secondary, tertiary and quaternary care) and levels of income of the geographic regions where clinicians practise (low and middle income vs high income). By summarising findings using the UTAUT framework, this review provides a comprehensive understanding of the current context-specific landscape, identifies knowledge gaps and proposes areas for future research aimed at guiding the development of targeted strategies to enhance clinician adoption and use of AI in clinical practice.

Review questions

This scoping review sought to answer the following questions:

  • What are the existing trends in the barriers to and facilitators of clinician AI acceptance and use across different types of healthcare settings (primary, secondary, tertiary)?

  • How do these trends manifest across regions with varying income levels (high-income vs low-to-middle-income)?

  • What gaps exist in the current literature regarding these trends, and how can future research address them to inform the effective integration of AI into diverse healthcare contexts?

Methods

Design

This scoping review was conducted in accordance with the Joanna Briggs Institute (JBI) methodology for scoping reviews15 and reported using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews checklist16 (see online supplemental material 1). The protocol of this scoping review is submitted as online supplemental material 2.

Inclusion criteria

The authors followed the Participants, Intervention, Comparator, Outcome, Setting and Time period (PICOST) framework, integrating core elements of the Participant, Concept and Context (PCC) framework as defined in the JBI methodology for scoping reviews to inform the inclusion/exclusion criteria of this scoping review (see online supplemental material 3). The concept in the PCC framework includes interventions and/or phenomena of interest. The concept may also include the outcomes of interest of the scoping review,15 but here we opted to separate them to follow the PICOST framework.

Participants

This scoping review included sources of evidence whose participants are clinicians, including primary care providers (eg, paediatricians, internists, nurses) and specialists of any medical or surgical specialty. Participants also included paramedical personnel who are gatekeepers of AI in clinical care (eg, physiotherapists, imaging technologists), as well as physicians at any educational or professional level (eg, senior physicians, attendings, medical students, fellows, interns or residents).

Concept/intervention

This scoping review focused on either hypothetical or real-life applications of AI in healthcare settings. This included any AI-driven tools for diagnosis, treatment decision support, screening or patient monitoring. Papers focused on digital devices or robots not supported by an AI system were excluded.

Context

This scoping review included sources of evidence that focused on any healthcare setting (primary, secondary, tertiary or quaternary care) and were conducted in regions/countries of any income level. The World Bank country classification by income level for 2024–2025 was used to classify the regions.17
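To make the categorisation step concrete, the sketch below (Python; not the authors' code, and the country-to-class entries are an illustrative stub rather than the full World Bank tables) shows how a study setting could be collapsed from the four World Bank income classes into the binary HIC/LMIC grouping used in this review.

```python
# Illustrative sketch only: collapse the four World Bank income classes into
# the review's binary HIC/LMIC grouping. Populate WORLD_BANK_CLASS from the
# published 2024-2025 classification tables; the entries below are examples.

WORLD_BANK_CLASS = {
    "United States": "high",
    "Germany": "high",
    "India": "lower-middle",
    "Ethiopia": "low",
}

def income_group(country: str) -> str:
    """Map a country to the review's HIC vs LMIC split."""
    wb_class = WORLD_BANK_CLASS[country]
    return "HIC" if wb_class == "high" else "LMIC"

assert income_group("India") == "LMIC"
assert income_group("Germany") == "HIC"
```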

Comparator

Not applicable.

Outcome

This scoping review focused on barriers to and facilitators of clinician acceptance and use of AI healthcare applications.

Study design/types of sources

This scoping review included empirical studies of any design (qualitative studies, surveys, quantitative studies, case studies) and conceptual studies. This approach allowed a systematic mapping of the sources of evidence that focus on barriers to and facilitators of clinicians’ AI acceptance and use in healthcare settings.

Time period

This scoping review focused on studies exploring clinician acceptance and use of AI following the implementation of AI systems in clinical care, regardless of the duration (in case of real-life AI applications).

Data sources and searches

The search strategy aimed to locate peer-reviewed publications. The authors employed a three-step search strategy for this review. First, two independent investigators (CEAS, MAM) conducted an initial limited search of MEDLINE (PubMed) (http://www.nlm.nih.gov/bsd/pmresources.html) and Embase (https://www.embase.com) to identify articles on the topic, particularly reviews. The text words contained in the titles and abstracts of relevant articles, and the index terms used to describe the articles, informed the development of a full search strategy, tailored for PubMed and Embase and validated by the senior author (JJA). Investigators searched using variations of the following terms: clinicians, acceptance and healthcare settings. The search was limited to human studies, and only sources of evidence published in English were considered for inclusion.
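As an illustration only (the review's actual search strings are provided in online supplemental material 4), a Boolean query of the kind described, restricted to English-language human studies, could be run against PubMed programmatically via the NCBI E-utilities, for example through Biopython's Entrez module. The query terms below are placeholders, not the validated strategy.

```python
# Hypothetical sketch of a PubMed search of the kind described above;
# the query string is a placeholder, not the review's search strategy.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI requires a contact address

query = (
    '("artificial intelligence"[MeSH] OR "machine learning"[tiab]) '
    "AND (clinician*[tiab] OR physician*[tiab]) "
    "AND (acceptance[tiab] OR adoption[tiab] OR barrier*[tiab] OR facilitator*[tiab]) "
    "AND english[lang] AND humans[MeSH]"
)

handle = Entrez.esearch(
    db="pubmed",
    term=query,
    datetype="pdat",          # filter on publication date
    mindate="2010/01/01",
    maxdate="2023/08/21",     # date the search was conducted
    retmax=1000,
)
record = Entrez.read(handle)
print(record["Count"], "records matched;", len(record["IdList"]), "PMIDs retrieved")
```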

While expanding the search to additional databases or grey literature sources could enhance comprehensiveness, this was not feasible due to time and resource constraints. Given these limitations, two relevant databases were selected. PubMed, maintained by the U.S. National Library of Medicine, provides extensive coverage of biomedical literature, including studies on AI applications in healthcare. Embase complements this by offering broad international biomedical research coverage, particularly in pharmacology and medical technology, and includes studies not indexed in PubMed or other databases. To mitigate potential limitations in search scope, supplemental strategies such as backward and forward citation tracking were employed to identify additional relevant studies beyond the primary database searches. Additionally, reference lists of selected studies and related articles in databases were reviewed to further ensure the inclusion of pertinent literature.

The search focused on articles published from 2010 to 21 August 2023 (when the search was conducted). The publication date from 2010 onwards was considered to ensure relevance to the current landscape of AI in healthcare. This timeframe aligns with a marked increase in AI-related publications beginning in 2010, reflecting the growing application and development of AI technologies in healthcare during this period.6 The full search terms are presented in online supplemental material 4.

A supplemental search was conducted on 17 July 2024 using the same strategies and processes to identify newly published studies that could potentially influence the interpretation of results. This search aimed to identify any shifts in the interpretation of results, allowing the scoping review to integrate new insights that could shape future research directions. However, the findings indicated that newly published studies did not alter the outcomes of our analysis. To ensure methodological consistency and minimise potential biases, study inclusion was restricted to publications up to 21 August 2023. This approach maintains a coherent data set, ensuring a rigorous mapping of factors influencing clinician AI adoption, while preserving alignment with the initial search timeline and reinforcing the transparency and replicability of the review process.

Study selection

After completing the search, two investigators (CEAS and MAM) compiled and imported identified citations into Microsoft Excel (V.2501), where they systematically removed duplicate entries. Following a pilot test, two independent reviewers (CEAS and MAM) assessed the titles and abstracts to determine their relevance based on the predefined inclusion criteria. Full-text versions of potentially eligible studies were retrieved, with citation details imported into Zotero (V.6.0.27). Three authors (CEAS, MAM, YW) independently assessed the full texts of all initially eligible articles identified in the searches to select relevant publications for inclusion in this review. Reasons for the exclusion of sources of evidence that did not meet the inclusion criteria at full text were recorded and reported. Any discrepancies in study selection were resolved through discussion.
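For readers interested in the mechanics of duplicate removal, the following minimal sketch (one reasonable implementation, assumed for illustration; not the authors' Excel workflow) collapses citations pulled from both databases on a normalised DOI where available, falling back to a normalised title:

```python
# Illustrative deduplication of citations exported from PubMed and Embase.
import re

def dedup_key(citation: dict) -> str:
    """Prefer the DOI; otherwise use a lower-cased, punctuation-free title."""
    doi = (citation.get("doi") or "").strip().lower()
    if doi:
        return doi
    return re.sub(r"[^a-z0-9]+", " ", citation["title"].lower()).strip()

def remove_duplicates(citations: list[dict]) -> list[dict]:
    seen, unique = set(), []
    for citation in citations:
        key = dedup_key(citation)
        if key not in seen:  # first occurrence wins
            seen.add(key)
            unique.append(citation)
    return unique

records = [
    {"title": "AI acceptance in primary care", "doi": "10.1000/x1"},
    {"title": "AI Acceptance in Primary Care!", "doi": "10.1000/x1"},  # duplicate
]
assert len(remove_duplicates(records)) == 1
```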

Data extraction

The research team independently reviewed an initial five articles, discussed the results, then designed a data extraction form through REDCap (V.15.1.1), a customised web-based informatics system. The data extraction tool was customised to include the constructs of the UTAUT framework to summarise main findings. The definition of each construct and specified role of key moderators are summarised in table 1. To ensure consistency and capture all relevant data, the investigators pilot-tested the data extraction form by reviewing the same five preliminary articles.

Table 1

Definition and influence of Unified Theory of Acceptance and Use of Technology’s constructs and the role of key moderators5

Overall, using the predesigned form (online supplemental material 5), investigators extracted data on source of evidence characteristics (eg, year of publication, type of study), type of AI-based system described in the article (eg, category or nature of AI system, purpose of the system), description of research participants (eg, position of clinicians, category of clinicians), barriers to and facilitators of clinician AI acceptance and use (performance expectancy, effort expectancy, facilitating conditions and social influence), clinician demographics and their moderator effects, and other barriers and facilitators not captured by UTAUT.
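The structure of such an extraction record is sketched below (field names are assumptions for illustration; the actual REDCap instrument is provided in online supplemental material 5). Each UTAUT construct is recorded as a barrier, a facilitator, both (ambivalent) or not reported:

```python
# Illustrative extraction record; not the authors' REDCap instrument.
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    BARRIER = "barrier"
    FACILITATOR = "facilitator"
    AMBIVALENT = "ambivalent"      # reported as both barrier and facilitator
    NOT_REPORTED = "not reported"

@dataclass
class ExtractionRecord:
    year: int
    study_type: str                # eg, "qualitative", "survey", "conceptual"
    ai_purpose: str                # eg, "diagnosis", "treatment decision support"
    participants: list[str]        # eg, ["attending physicians", "nurses"]
    performance_expectancy: Role = Role.NOT_REPORTED
    effort_expectancy: Role = Role.NOT_REPORTED
    social_influence: Role = Role.NOT_REPORTED
    facilitating_conditions: Role = Role.NOT_REPORTED
    moderators_examined: bool = False   # clinician demographics as moderators
    other_factors: list[str] = field(default_factory=list)  # beyond UTAUT
```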

Data analysis

Two independent investigators (CEAS and MAM) applied the framework method18 to analyse and summarise additional barriers to and facilitators of clinician acceptance and use of AI in healthcare settings beyond those captured by UTAUT. The scientific software Atlas.ti V.25 was used to support data analysis. Discrepancies were resolved by the senior author (JJA). These data were defined as emerged barriers/facilitators and grouped according to six main codes: clinician involvement, conclusiveness, clinician hesitancy, legal and ethical considerations, relationship dynamics and technical features.
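A minimal sketch of the resulting aggregation (code names are taken from the review; the study entries and direction labels are illustrative assumptions) shows how coded findings of this kind can be tallied per code and direction, as later summarised in figure 4:

```python
# Illustrative aggregation of framework-analysis codes across studies.
from collections import Counter

EMERGED_CODES = {
    "clinician involvement", "conclusiveness", "clinician hesitancy",
    "legal and ethical considerations", "relationship dynamics",
    "technical features",
}

# (study id, code, direction) triples; entries here are examples only.
coded_findings = [
    ("study_13", "clinician involvement", "facilitator"),
    ("study_22", "legal and ethical considerations", "barrier"),
    ("study_32", "technical features", "barrier"),
    ("study_32", "conclusiveness", "facilitator"),
]

tally = Counter(
    (code, direction)
    for _, code, direction in coded_findings
    if code in EMERGED_CODES
)
for (code, direction), n in sorted(tally.items()):
    print(f"{code} ({direction}): {n}")
```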

Patient and public involvement

None.

Results

Search results

Figure 1 shows details about the selection process, including the decision matrix of full-text assessment. The electronic database searches yielded 584 articles, and a further 27 were identified through supplementary strategies. After duplicates were removed, 328 citations remained. Based on title and abstract review, 257 articles were excluded, leaving 71 articles for full-text retrieval and eligibility assessment. Of these 71 articles, 6 full texts were unavailable. Thus, investigators entered 65 articles into REDCap for full-text assessment. Of the 65 articles, 19 were irrelevant to this review. The most common reasons for exclusion were an irrelevant intervention, that is, a healthcare technology application (eg, digital devices or robots) not supported by an AI system (n=12); irrelevant outcomes, that is, a lack of focus on barriers to and facilitators of clinician AI acceptance and use (n=4); and a full text written in a language other than English (abstract in English) (n=3). The remaining 46 studies were considered eligible and included in this review.
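The selection flow can be restated arithmetically as a simple consistency check on the counts reported above (the number of duplicates is implied rather than reported):

```python
# Consistency check on the study selection counts reported in the text.
identified = 584 + 27                      # database hits + supplementary strategies
unique = 328                               # after duplicate removal
duplicates_removed = identified - unique   # 283 (implied, not reported)
sought_full_text = unique - 257            # 71 after title/abstract screening
assessed_full_text = sought_full_text - 6  # 65 (6 full texts unavailable)
excluded_full_text = 12 + 4 + 3            # 19 exclusions at full text
included = assessed_full_text - excluded_full_text
assert (sought_full_text, assessed_full_text, included) == (71, 65, 46)
```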

Figure 1

Selection process of the sources of evidence.

Characteristics of the sources of evidence

The study yielded 46 articles that were included in the analysis (table 2). Of these articles, 44 were empirical and two were conceptual, distributed across all publication years. Among the included studies, 32 were conducted in high-income countries (HICs), nine involved low-income and middle-income countries (LMICs), four had a global scope and one did not specify the region of clinician practice. Of the nine LMIC-inclusive studies, only three were conducted exclusively in LMICs, while the remaining six encompassed both HICs and LMICs. Regarding healthcare settings, 21 studies focused on primary care, 26 on secondary care and 21 on tertiary care. The predominant applications of AI were for diagnostic purposes (22 studies) and treatment-related decision-making (20 studies). Study participants were primarily doctors (42 studies), with the majority being attending physicians (41 studies), followed by nurses (20 studies). Finally, only one study explored the moderating effect of clinician demographics on the relationship between barriers/facilitators and AI acceptance and use. Online supplemental material 6 details the characteristics of each source of evidence.

Table 2

Characteristics and focus of the sources of evidence

Barriers and facilitators to clinician AI acceptance and use captured by UTAUT

This scoping review identified both barriers and facilitators through the UTAUT framework, specifically performance expectancy,3 4 7–13 19–53 effort expectancy,7 8 10–12 23 24 27 30 34 36 37 39 41 42 49 social influence3 7 8 10 11 19 25 27 34 37 45 and facilitating conditions.3 4 7 8 10–13 20 22–25 28 30 32 34 36–39 41 43 46 47 50 51 53 54

  • Performance expectancy reflects clinicians’ belief that AI can enhance efficiency, accuracy and productivity, making tasks easier and improving patient outcomes.

  • Effort expectancy pertains to the perceived ease of use, where AI is more readily adopted if it is intuitive, minimally disruptive and seamlessly integrated into clinical workflows.

  • Social influence encompasses peer, institutional and leadership support, where endorsement from key stakeholders fosters AI acceptance.

  • Facilitating conditions refer to the availability of training, resources and technical assistance, ensuring clinicians have the necessary infrastructure for effective implementation.

Across studies, these factors functioned as ambivalent drivers of AI adoption. Clinicians perceived or experienced them as either barriers or facilitators, depending on whether they were viewed as challenges requiring resolution (barriers) or features warranting optimisation (facilitators). Performance expectancy and facilitating conditions were the most frequently cited drivers across levels of healthcare (primary, secondary and tertiary care) and regions of practice based on income level (HICs and LMICs) (see figures 2 and 3). Online supplemental material 7 provides a detailed breakdown of findings from each source of evidence.

Figure 2

Barriers to and facilitators of clinician artificial intelligence (AI) acceptance by the types of healthcare application. The horizontal bars represent the number of studies that reported or discussed the constructs captured by the Unified Theory of Acceptance and Use of Technology framework. The light blue bars depict social influence, the green ones are facilitating conditions, the orange bars are effort expectancy and the bold blue bars are performance expectancy. The constructs are reported based on AI healthcare applications (tertiary, secondary and primary care). The first box above includes barriers, and the second box below includes facilitators of AI acceptance and use in clinical care.

Figure 3

Barriers to and facilitators of clinician artificial intelligence (AI) acceptance and use by the level of income of geographic distribution where the sources of evidence were conducted. The horizontal bars represent the number of studies that reported or discussed the constructs captured by the Unified Theory of Acceptance and Use of Technology framework. The light blue bars depict social influence, the green ones are facilitating conditions, the orange bars are effort expectancy and the bold blue bars are performance expectancy. The constructs are reported based on the level of income of geographic distribution of the sources of evidence—low and middle income and high income (World Bank country classifications by income level for 2024–2025).17 The first box above includes barriers, and the second box below includes facilitators of AI acceptance and use in clinical care.

Emerged barriers and facilitators not captured by UTAUT

In addition to UTAUT constructs, 24 studies identified additional factors influencing clinician AI acceptance (see figure 4). See online supplemental material 7 for more details about the main findings of each source of evidence. These factors included both event-based concerns and abstract perceptions. Overall, key themes emerged in our analysis, including clinician involvement as a facilitator13 34 39 and clinician hesitancy10 11 23–25 29 30 34 52 53 55 and legal and ethical considerations3 22 26 27 30 52 as barriers. Other factors—conclusiveness,8 11 13 22 29 31 32 34 39 51 54 relationship dynamics10 22 23 26 28 34 44 54 and technical features32 54—functioned as ambivalent drivers depending on the context. For instance, AI technical features acted as barriers when poor system design and lack of interoperability disrupted workflows but served as facilitators when well-engineered, interoperable systems enhanced usability and integration.

Figure 4

Emerged barriers to and facilitators of clinician artificial intelligence (AI) acceptance and use not captured by the Unified Theory of Acceptance and Use of Technology (UTAUT) framework. The horizontal bars represent the number of reviewed studies that reported or discussed the emerged factors not captured by the UTAUT framework (clinician involvement, technical features, legal and ethical considerations, relationship dynamics, conclusiveness, clinician hesitancy). The orange bars are barriers, and the bold blue bars are facilitators of clinician AI acceptance.

Research on clinician AI adoption in LMICs remains scarce, with available findings suggesting no unique barriers distinct from those in HICs. Among the available evidence, only two studies8 55 identified conclusiveness and clinician hesitancy as barriers within LMIC clinical practice, and three LMIC-exclusive studies identified no additional factors beyond UTAUT.19–21 This highlights the need for further investigation into context-specific determinants of AI adoption in resource-constrained settings.

Variations by levels of healthcare settings

Clinicians’ perceptions and/or experiences of the emerging AI drivers varied across types of healthcare settings. This scoping review identified 13 such studies: five exclusively examining primary care,22–25 51 four focusing on secondary care13 26–28 and four addressing tertiary care,29–32 with no overlap between settings.

Relationship dynamics emerged as a key concern in primary and secondary care but were absent in tertiary care

  • Primary care clinicians expressed apprehension that AI could diminish direct patient interactions, potentially eroding the humanistic aspects of care and compromising healthcare empathy.22

  • Secondary care concerns shifted towards the interaction between clinicians and AI systems themselves, particularly regarding trustworthiness—defined as the system’s perceived transparency, consistency and alignment with clinical reasoning. This included the level of confidence clinicians could place in AI-driven decision-making processes.26 28

Legal and ethical concerns also varied

  • Primary care prioritised patient safety and AI-related harm (eg, misdiagnosis).22

  • Secondary care emphasised data privacy and security risks.26 27

  • Tertiary care focused on accountability, liability distribution and regulatory gaps in AI-driven diagnostics.30

Clinician hesitancy emerged as a key concern in primary and tertiary care but was absent in secondary care

  • Primary care clinicians feared job displacement as AI automated routine decision-making.23 25 51

  • Tertiary care clinicians worried about loss of autonomy, over-reliance on AI and skill devaluation.29 30

Finally, clinician involvement in AI design, implementation and validation emerged as a key facilitator in secondary care, fostering greater trust and adoption.13 Technical features were primarily linked to the system design quality and interface interoperability in tertiary care.32 Concerns about AI conclusiveness—including robustness and reliance on evidence-based recommendations—were consistent across all healthcare settings, serving as both a critical enabler of AI adoption13 31 32 and a source of clinician scepticism.22 29 31 51

Discussion

Summary of evidence

This scoping review synthesised 46 studies to examine barriers to and facilitators of clinician AI acceptance and use across economic and healthcare contexts. While UTAUT constructs were widely reported, their influence remained ambivalent. Performance expectancy and facilitating conditions emerged as the most frequently cited factors across studies. However, a notable gap exists in research on the moderating role of clinician demographics in AI adoption. Beyond UTAUT, additional drivers—including relationship dynamics, legal and ethical considerations, clinician hesitancy, clinician involvement, technical features and conclusiveness—highlight the context-dependent nature of AI adoption. The under-representation of LMICs in the literature limits understanding of AI implementation in resource-constrained settings.

Findings in the context of existing literature

Our findings align with prior reviews that highlight the ambivalence of AI adoption drivers56 57—the same factors may act as barriers or facilitators depending on context. Performance expectancy and facilitating conditions emerged as the most frequently reported drivers of AI adoption, consistent with other reviews.58 These factors may represent universal determinants of clinician AI acceptance, transcending healthcare settings and economic contexts.

Consistent with prior reviews,59 our findings reveal that UTAUT overlooks critical factors influencing clinician AI acceptance and use in healthcare settings. Venkatesh et al introduced UTAUT2 to address intrinsic user factors; however, this extension focuses on consumer adoption rather than professional decision-making.60 As such, it would be inappropriate to apply UTAUT2 in medical settings, as it does not consider the dynamics and culture of an employee’s (here, a clinician’s) environment. Thus, this review further highlights the need for a cohesive model of clinical adoption that considers clinicians’ perspectives on acceptance of AI tools for healthcare applications.

While previous reviews have documented the growing research on AI in LMICs,61 62 our findings expose a critical gap in evidence regarding emerging adoption drivers in these settings. Previous reviews have pointed to infrastructure limitations, data scarcity and trust deficits as key challenges in LMICs.63 However, unlike prior work, this review extends its scope to hypothetical AI applications and conceptual studies, offering a broader perspective on AI adoption in LMICs. The under-representation of LMICs in empirical research restricts insights into context-specific determinants, necessitating further investigation.

Variations by levels of healthcare settings

Relationship dynamics vary across primary and secondary care settings

While prior reviews identified relational factors—such as patient–doctor dynamics and AI–physician interactions—as barriers to AI adoption in healthcare,64 our review significantly advances the field by contextualising these issues across primary and secondary care settings. In primary care, fostering long-term, trust-based relationships with patients and their families is essential, relying heavily on interpersonal communication to support empathetic, patient-centred and continuous care.65 In contrast, secondary care is defined by specialised interventions for patients typically referred from primary care. Advanced technologies, including AI-driven systems, are frequently integrated to support diagnostic and therapeutic decision-making.66 However, the inherent black-box nature of many AI decision support systems undermines clinician trust in secondary care settings as concerns persist regarding transparency and alignment with clinical reasoning and expert judgement.67

Legal and ethical considerations vary across primary, secondary and tertiary care

In primary care, where physicians oversee early diagnosis and timely referrals, concerns about AI-driven misdiagnosis or inappropriate recommendations—which could potentially compromise patient safety—are consistent with findings from other reviews.68 These concerns are echoed in studies highlighting the potential for AI-related diagnostic errors, which may prevent patients from receiving necessary care.69 Previous reviews have highlighted data privacy and liability issues as barriers to clinician adoption of AI in healthcare64 70 71; our review situates these concerns distinctly within secondary and tertiary care settings, thereby advancing previous literature. In secondary care, the integration of AI-driven diagnostic tools and reliance on electronic health records raise concerns about data breaches and unauthorised access, especially given the extensive use of advanced imaging and AI-supported clinical decision-making in specialised care. Research indicates that the increasing use of AI in medical subspecialties brings challenges related to data sharing and triangulation, heightening concerns about data privacy and security.72 In tertiary care, where high-risk, technology-intensive interventions such as AI-assisted precision medicine, robotic surgery and complex imaging analysis are more common, concerns shift towards liability and regulatory oversight. The ambiguity in attributing responsibility for AI-driven decisions is particularly pronounced in multidisciplinary settings, where multiple specialists contribute to patient management. The lack of clear medical liability regulations governing AI-assisted diagnostics and autonomous decision-making further exacerbates these concerns, leading to clinician hesitancy in fully integrating AI into high-stakes medical practice. This apprehension is underscored by the potential for clinicians to become ‘liability sinks’ for AI-related errors, assuming personal accountability for adverse outcomes even when the fault lies within the AI system or organisational processes.73

Clinician hesitancy varies across primary and tertiary care settings

In primary care, where routine visits and chronic disease management are central, AI’s ability to automate structured decision-making may raise concerns about role displacement, with clinicians fearing a diminished demand for human expertise. Conversely, in tertiary care, where clinicians manage high-risk, specialised and complex interventions, AI is not perceived as a job replacement threat but rather as a challenge to clinical autonomy and expertise. The integration of AI into diagnostics, treatment planning and procedural decision-making may lead to concerns about over-reliance on algorithmic outputs, erosion of critical thinking and clinician de-skilling. These findings contrast with one review that attributed primary care hesitancy mainly to fears of over-reliance on technology, potentially compromising clinical judgement.68 However, the lack of methodological clarity and AI application specificity in that review limits direct comparison. This underscores the need for further research to distinguish AI’s impact as a decision-support tool versus an automation mechanism across levels of healthcare.

Clinician involvement in secondary care

Previous reviews have extensively emphasised clinician involvement in the design, implementation and validation of AI systems as a crucial factor in fostering trust and adoption in healthcare settings.71 74 75 However, our review advances this literature by contextualising clinician involvement as a distinct facilitator of AI acceptance specifically within secondary care, where engagement in AI development enhances clinical integration.

Technical features in tertiary care settings

Likewise, while previous reviews have identified technical features—particularly concerns about design quality and interface interoperability—as barriers to AI adoption in healthcare broadly,75 76 our review uniquely situates these concerns within tertiary care, highlighting their specific impact in high-risk, specialised clinical environments.

The supplementary search conducted on 17 July 2024 identified 57 articles that would have otherwise been eligible for this study. Although these additional studies expanded existing evidence on AI applications in healthcare, especially within low-income and middle-income settings, many included findings relevant to multiple settings, with overlaps across resource-level boundaries. No distinct regional or healthcare setting-specific variations emerged regarding key determinants influencing clinician adoption of AI. Consistently, performance expectancy and perceived usefulness remained prominent. Moreover, these additional studies provided no new insights into moderators influencing clinician adoption. While some emerging factors not covered by the UTAUT framework were framed differently in certain supplementary articles, their definitions still aligned with the main themes of AI conclusiveness, clinician involvement, clinician hesitancy, legal and ethical considerations, relationship dynamics and technical features. Thus, although contributing valuable breadth, the supplementary search did not substantively alter the review’s original conclusions.

Implications for practice and policy

As this scoping review did not assess the quality of included studies, its practice and policy implications should be interpreted with caution. However, findings suggest that addressing AI conclusiveness across the levels of healthcare, and performance expectancy and facilitating conditions across all healthcare and economic contexts, may support AI adoption. Efforts to enhance training, technical support and system interoperability could improve integration across settings. In primary care, AI tools should be designed to support, rather than replace, clinician–patient interactions, mitigating concerns about relationship dynamics and role displacement. In secondary care, strengthening clinician involvement in AI design and validation, along with improving AI transparency, may help build trust in AI-driven decision-making. In tertiary care, where concerns focus on clinical autonomy and algorithmic over-reliance, AI should function as augmented intelligence to complement specialist expertise rather than replace clinical judgement. Additionally, accountability and liability uncertainties in secondary and tertiary care emphasise the need for further discussions on regulatory clarity. While these insights offer direction for AI integration based on healthcare contexts, further evaluation is needed to inform formal policy and practice recommendations.

Strengths and limitations

This scoping review systematically examines patterns in the barriers to and facilitators of clinician acceptance and use of AI in healthcare, categorising these factors by both the type of AI healthcare applications and the income levels of the countries where the reviewed studies were conducted. This methodological approach provides insights into how diverse contexts shape clinicians’ experiences with AI technologies. The review also includes a systematic search strategy and predefined eligibility criteria to identify relevant studies, enhancing the rigour and replicability of the findings.

While the review provides valuable guidance, it has several methodological limitations. First, the search strategy was confined to two databases, excluded non-English studies, and included only peer-reviewed articles accessible through journals with open access or the library subscription of our home institution. This limitation may have excluded relevant studies from other databases or sources, such as grey literature or subscription-only journals, thereby narrowing the review’s scope. Such constraints could particularly affect the representation of studies from LMICs, where research may be less frequently indexed in widely used databases or published in open-access journals. In addition, the exclusion of non-English studies may have overlooked critical perspectives, particularly from LMICs, where research is often published in regional languages rather than English-language journals. This exclusion could have led to an under-representation of context-specific barriers and facilitators relevant to clinician AI acceptance in non-English-speaking regions. Given that only a few included studies explicitly examined AI adoption in LMICs, this limitation may have further contributed to the scarcity of LMIC-specific evidence in our findings. Consequently, the generalisability of the review’s conclusions regarding emerging drivers of clinician AI adoption in LMICs may be affected, underscoring the need for future research that incorporates non-English studies to capture a more comprehensive global perspective.

Second, some included conceptual reviews and studies lacked sufficient geographic details regarding their settings or participants. As a result, five studies were excluded from the trend analysis. Although the number of excluded studies is small, their absence may limit the comprehensiveness of geographic variability analyses. These methodological constraints highlight the importance of adopting a broader search strategy, including additional databases and grey literature, to improve inclusivity in future research.

Conclusion

This scoping review identifies critical gaps in understanding clinician acceptance and use of AI in healthcare. While prior research extensively explores the UTAUT constructs—performance expectancy, effort expectancy, social influence and facilitating conditions—the limited examination of moderators such as age, gender, experience and voluntariness constrains insight into individual-level determinants of AI adoption. The consistent prominence of performance expectancy and facilitating conditions across diverse levels and economic contexts of care, along with AI conclusiveness at different levels of care, suggests these factors serve as universal determinants of AI acceptance. This reflects clinicians’ confidence in AI’s efficiency and accuracy, as well as the necessity of training and support for its integration into clinical practice regardless of the context. However, the limited representation of LMICs in the literature restricts understanding of context-specific influences, including policy, sociocultural and economic factors. While the supplementary search revealed a growing body of evidence from LMICs, further research is needed to fully capture these determinants. Moreover, this review underscores the need for a more comprehensive framework to address the complex interplay of factors shaping AI adoption in healthcare. Although UTAUT remains the most established model, it does not encompass emerging factors such as clinician hesitancy, involvement in AI design, relationship dynamics, ethical–legal considerations, AI conclusiveness and technical features. By demonstrating how these factors vary across primary, secondary and tertiary care, this review advances the literature and highlights the necessity of refining existing models or developing new theoretical frameworks. Future research should

  • Conduct systematic reviews and meta-analyses to rigorously assess universal determinants (eg, performance expectancy, facilitating conditions, AI conclusiveness) and their interactions across healthcare settings.

  • Undertake primary mixed-method studies in LMICs to investigate policy, sociocultural and economic drivers and their intersection with universal determinants.

  • Employ mixed-method research to refine or expand theoretical frameworks, integrating emerging factors such as clinician hesitancy, involvement in AI design, relationship dynamics, ethical–legal considerations, AI conclusiveness, and technical features.

Addressing these gaps will generate robust, context-sensitive evidence to inform strategies for effective and equitable AI adoption in healthcare worldwide.


Ethics statements

Patient consent for publication

Ethics approval

This scoping review involved synthesising data from publicly available sources and published literature, without any direct involvement of human participants or use of identifiable data. Therefore, ethical approval was not required.

Acknowledgments

Many thanks to current and former members of Professor Arias’ lab team: Kendall Williams, Lillian Morgano and Benjamin Wills. This work was presented as an oral presentation at the Graduate Conference for Research, Scholarship and Creativity 2023 and as a poster at the 2023 School of Public Health’s Research Conference of Georgia State University.

References

Footnotes

  • Contributors CEAS and JJA conceptualised, designed the study and interpreted the data. CEAS drafted the manuscript. CEAS and MAM ran searches through the databases. AF revised the first draft and read and approved the final manuscript before submission. CEAS, YW and MAM screened, assessed for eligibility, extracted and summarised the data. JJA reviewed and approved the selection process and solved all discrepancies; and substantively revised the manuscript. All authors read and approved the final manuscript before submission. CEAS is the guarantor.

  • Funding This scoping review was a subproject of a parent project titled 'Natural Language Processing and Automated Speech Recognition to Identify Older Adults with Cognitive Impairment Supplement' funded by the National Institute on Aging (3R01AG066471-03S1 and 5R01AG080093-03). The funder did not play any role in this scoping review.

  • Competing interests None declared.

  • Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.