Article Text
Abstract
Objectives Medical artificial intelligence (AI) has been widely applied in the clinical field owing to its convenience and innovation. However, several policy and regulatory issues, such as credibility, sharing of responsibility and ethics, have raised concerns about the use of AI. It is therefore necessary to understand the general public’s views on medical AI. Here, a meta-synthesis was conducted to analyse and summarise the public’s understanding of the application of AI in the healthcare field, and to provide recommendations for the future use and management of AI in medical practice.
Design This was a meta-synthesis of qualitative studies.
Method A search was performed on the following databases to identify studies published in English and Chinese: MEDLINE, CINAHL, Web of Science, Cochrane Library, Embase, PsycINFO, CNKI, Wanfang and VIP. The search covered each database from inception to 25 December 2021. The meta-aggregation approach of the Joanna Briggs Institute (JBI) was used to summarise findings from qualitative studies, focusing on the public’s perception of the application of AI in healthcare.
Results Of the 5128 studies screened, 12 met the inclusion criteria and were therefore included in the analysis. Three synthesised findings formed the basis of our conclusions: the advantages of medical AI from the public’s perspective, ethical and legal concerns about medical AI from the public’s perspective, and public suggestions on the application of AI in the medical field.
Conclusion Results showed that the public acknowledges the unique advantages and convenience of medical AI. Meanwhile, several concerns about the application of medical AI were observed, most of which involve ethical and legal issues. Standardised application and reasonable supervision of medical AI are key to ensuring its effective utilisation. Based on the public’s perspective, this analysis provides insights and suggestions for health managers on how to implement and apply medical AI smoothly, while ensuring safety in healthcare practice.
PROSPERO registration number CRD42022315033.
- medical ethics
- information technology
- health policy
Data availability statement
All data relevant to the study are included in the article or uploaded as supplementary information.
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
Strengths and limitations of this study
This meta-synthesis of qualitative studies was conducted in accordance with the Joanna Briggs Institute (JBI) methodology for meta-aggregation, and aimed to identify the public’s perception of the application of artificial intelligence (AI) in healthcare.
The JBI Qualitative Critical Appraisal Checklist was used to evaluate the quality of the included studies.
Synthesis of the included studies relied on the availability of direct quotations reflecting the views or perceptions held by the public about the application of AI in healthcare.
A limitation of this study is that only publications in English and Chinese were included in this meta-synthesis, which may potentially cause language bias.
The participants in each study had varied experience with medical AI; future studies should consider this experience as a variable when exploring perceptions of medical AI among different participants.
Introduction
Artificial intelligence (AI) is currently one of the most controversial topics,1 especially since there is no consensus on its definition. Professor John McCarthy, one of the founders of AI, defined it as ‘the science and engineering of making intelligent machines’.2 In other monographs, AI refers to the development of computer algorithms to accomplish tasks traditionally associated with human intelligence, such as the ability to learn and solve problems.3 In recent years, AI has been increasingly applied in the medical and healthcare field. For example, in radiology, with the help of big data and deep learning technologies, AI imaging applications improve the accuracy of diagnosis and facilitate timely diagnoses.4 Another widely used AI system is the medical robot,5 and the advantages of the Da Vinci robotic surgery system in reducing intraoperative bleeding and shortening operation time have been documented.6 7 In addition, during the COVID-19 outbreak, the use of such aids as ultraviolet disinfectants and social robots was found to be effective in managing disease, treating patients and ensuring the safety of healthcare workers.8 AI can also be used in public health management, for instance, through mobile health apps in the rehabilitation of patients with chronic diseases9 such as diabetes10 and stroke.11 Moreover, some studies have investigated the application of AI in diet,12 sports13 and emotional management.14 In fact, some scholars believe that AI is likely to reshape and reorient clinical medical practice in the next few years.15 Moreover, it is estimated that by 2026, global expenditure on healthcare AI technologies will reach up to US$45 billion.16

Although the application of AI in healthcare has greatly improved disease diagnosis and management, its use is still in its infancy compared with the application of AI in other industries, such as the engineering of smart devices, and its promotion and application still face many uncertainties and challenges. According to Choudhury,17 these challenges manifest at the macro, technical and individual levels. At the macro level, a recent survey of 265 clinicians actively practising in the USA revealed many regulatory and policy difficulties in the application of AI; in particular, the lack of AI accountability was identified as a significant barrier to its adoption in healthcare.10 At the technical level, since the performance of healthcare AI systems depends heavily on the data they are trained on, AI integrations that do not address data quality issues could exacerbate biases in healthcare owing to existing biased data repositories.12 For example, an algorithm trained mostly on Caucasian patients cannot be expected to achieve the same accuracy when applied to minorities.18 In addition, many developers of healthcare AI apps are not the end users. As such, developers primarily focus on AI’s analytic capabilities, accuracy, speed and data handling, with little attention to the human perspective,19 which limits the clinical utility of the designed apps.
In fact, most AI tools that have shown good performance during development prove impractical in clinical practice,20 and according to a survey published by the BBC in 2020, 80% of healthcare AI apps fail to meet National Health Service standards.21 Challenges at the individual level include issues around individuals’ awareness of, and trust in, AI.16 22 In his research, Choudhury17 derived a framework that focuses on the interaction between AI and clinicians. This framework explains how interactions between clinicians and AI vary according to human factors such as expectations, workload, trust, cognitive variables related to absorptive capacity and bounded rationality, and concerns about patient safety. Moreover, as additional potential users of healthcare AI, the public’s attitudes, requirements and expectations towards the tool need to be explored. Here, the term ‘public’ refers to both patients and healthy individuals, because research on healthcare AI relies on large datasets, which should contain information both from patients who may benefit from the research and from people with no health conditions who cannot benefit directly.23 Therefore, a comprehensive understanding of the public’s perspective can provide a more representative picture for the future development of healthcare AI.24
To date, research on AI includes qualitative studies exploring the public’s awareness of, and views towards, healthcare AI.25–27 However, results from a single qualitative study may not represent the public’s perception in a holistic manner. Accordingly, this study integrated several qualitative studies on the public’s perceptions and views of healthcare AI to provide guidance for the development of effective AI.
Methods
A meta-aggregation approach developed by the Joanna Briggs Institute (JBI) was used in this systematic review and qualitative meta-synthesis. The study was conducted between September 2021 and January 2022, in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) recommendations.28
Search strategy
The following three-step method was adopted in this review. First, an initial limited search was conducted on MEDLINE and CINAHL, after which a text word analysis of the titles, abstracts and index terms used to describe the articles was performed. Second, an extensive search was performed in the included databases (MEDLINE, CINAHL, Web of Science, Cochrane Library, Embase, PsycINFO, CNKI, Wanfang and VIP) using all the identified keywords and index terms. Lastly, the reference lists of all the identified reports and articles were searched to identify additional studies. Only studies published in English and Chinese were included in this review, with no restriction on publication date. The search strings and titles extracted from each database are shown in online supplemental file 1.
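As a purely hypothetical illustration of step two, the sketch below assembles a Boolean query from concept blocks; every term shown is an invented placeholder, and the actual search strings used in this review are those listed in online supplemental file 1.

```python
# Hypothetical sketch: OR-combine synonyms within each concept block,
# then AND-combine the blocks. All terms are illustrative placeholders.

population_terms = ["public", "patient*", "consumer*", "layperson*"]
interest_terms = ["artificial intelligence", "machine learning", "medical AI"]
outcome_terms = ["perception*", "attitude*", "view*", "qualitative"]

def block(terms):
    """Quote multi-word terms and OR-combine one concept block."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

query = " AND ".join(block(t) for t in (population_terms, interest_terms, outcome_terms))
print(query)
# (public OR patient* OR ...) AND ("artificial intelligence" OR ...) AND (...)
```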
Inclusion and exclusion criteria
The following were the inclusion criteria for the study:
Population: members of the public, regardless of age, gender, health status or history of medical AI use, etc.
Phenomenon of interest: the public’s perceptions about the use of AI in healthcare.
Setting: hospitals, homes or nursing homes, where healthcare AI was applied.
Design: qualitative or a mixed-methods study design.
Language: English or Chinese.
The exclusion criteria included:
Design: studies that did not use a qualitative approach.
Study types: conference papers, editorials, letters or general-comment articles.
Language: studies published in languages other than English or Chinese.
Studies for which the full text could not be obtained, or in which the data collection and analysis methods were not reported.
Study selection
The initially retrieved articles were imported into EndNote X9 software, and duplicate records were removed. Two investigators (CW and XC) independently screened all the records, reading the titles and abstracts to exclude literature that did not meet the inclusion criteria. The full texts were then read to identify studies that could be included in the analysis. In the event of discrepant results, a third researcher (DB) was invited to join the discussion and reach a consensus.
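A minimal sketch of the deduplication step is shown below; the review actually used EndNote X9, so the record structure and the title-normalisation rule here are illustrative assumptions only, not the authors’ workflow.

```python
# Illustrative deduplication: records whose titles normalise to the same
# string are treated as duplicates, and only the first occurrence is kept.

def normalise(title: str) -> str:
    """Lower-case the title and strip everything except letters and digits."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        key = normalise(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "Public views on medical AI", "source": "MEDLINE"},
    {"title": "PUBLIC VIEWS ON MEDICAL AI.", "source": "Embase"},  # duplicate
]
print(len(deduplicate(records)))  # -> 1
```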
Assessment of methodological quality
The methodological validity of the retrieved qualitative research papers was assessed by two reviewers using the JBI Qualitative Critical Appraisal Checklist, which contains 10 items covering the appropriateness of the methodological approach, the application of the methods and the representation of participants’ voices. Each criterion was rated at one of three levels, that is, ‘yes’, ‘no’ or ‘unclear’, and papers with fewer than six ‘yes’ ratings were excluded to ensure quality. Any disagreements between the two reviewers were resolved through discussion, or a third reviewer was involved to reach a consensus.
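As an illustration of the appraisal rule just described, the sketch below encodes the cut-off of six ‘yes’ ratings out of 10 items; the example ratings are invented for demonstration and are not taken from any included study.

```python
# A study passes quality appraisal only if at least 6 of the 10 JBI
# checklist items are rated 'yes'; each item may be 'yes', 'no' or 'unclear'.

def passes_appraisal(ratings):
    assert len(ratings) == 10, "the JBI qualitative checklist has 10 items"
    assert all(r in ("yes", "no", "unclear") for r in ratings)
    return ratings.count("yes") >= 6

example = ["yes", "yes", "unclear", "yes", "yes",
           "no", "yes", "yes", "unclear", "yes"]
print(passes_appraisal(example))  # -> True (7 'yes' ratings)
```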
Data extraction and synthesis
General characteristics of the included studies were extracted to gain a better understanding of the literature, including author(s), region, research objects, research methods, phenomena of interest and main results. The text labelled as results/findings, discussion/interpretation and conclusions by the original studies’ authors was extracted verbatim and entered into NVivo 2021 software. The JBI meta-aggregation approach29 30 was used to extract and synthesise the data. The philosophical foundation of the meta-aggregation approach is pragmatism and Husserlian transcendental phenomenology. The consistency of this approach with the philosophy of pragmatism is reflected in its aim to produce comprehensive statements in the form of ‘lines of action’ to inform decision-making at the clinical or policy level.31 As a result, it avoids reinterpretation of original research results and moves beyond the generation of theories. All findings or themes were presented as they appeared in the original studies, without reinterpretation. Two reviewers (CW and DB) re-read each included study to ensure maximum familiarity with the data. Subsequently, a three-step process was adopted to synthesise the qualitative findings. First, all concluding findings were extracted from each included paper. Second, the findings were categorised based on similarity in meaning, with at least two findings per category. Third, the categories were subjected to meta-synthesis to form a comprehensive set of synthesised findings. For each finding, two reviewers independently assessed the degree of congruity between the finding and its supporting data, and assigned a credibility rating of unequivocal, credible or unsupported. ‘Unequivocal’ indicates that the congruence between the finding and the supporting data was beyond reasonable doubt, ‘credible’ means that a clear association between them was lacking, and ‘unsupported’ implies that the data did not support the finding. Only unequivocal and credible findings were included; unsupported findings would have been presented separately (no unsupported findings were identified in this study).
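The data structures below sketch the three-step aggregation just described; the finding texts, ratings and category names are invented placeholders rather than the review’s actual data.

```python
# Schematic meta-aggregation: filter findings by credibility rating, group
# them into categories of at least two findings, then merge categories
# into a synthesised finding. All content is an invented placeholder.

from collections import defaultdict

findings = [
    {"text": "AI offers a second opinion", "rating": "unequivocal", "category": "data storage"},
    {"text": "AI aids accurate diagnosis", "rating": "credible", "category": "data storage"},
    {"text": "AI shortens waiting times", "rating": "unequivocal", "category": "efficiency"},
    {"text": "AI speeds up diagnosis", "rating": "credible", "category": "efficiency"},
]

categories = defaultdict(list)
for f in findings:
    if f["rating"] in ("unequivocal", "credible"):  # 'unsupported' is set aside
        categories[f["category"]].append(f["text"])

assert all(len(v) >= 2 for v in categories.values())  # >= 2 findings per category

synthesised = {"advantages of medical AI": sorted(categories)}
print(synthesised)  # -> {'advantages of medical AI': ['data storage', 'efficiency']}
```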
Patient and public involvement statement
Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Results
A total of 12 papers were included in this study, including 5 grounded theory studies, 6 descriptive qualitative studies and 1 phenomenological study. Figure 1 shows the literature screening process and results.
Literature screening process and results using Preferred Reporting Items for Systematic Reviews and Meta-Analyses flow chart.
Study characteristics and quality of studies
The characteristics of the included literature are shown in table 1. All studies showed congruity between the research methodology and the research questions, the representation and analysis of data, the data collection methods and the interpretation of results. Participants and their voices were adequately represented, and the conclusions were based on the data. Almost all studies (n=11) did not include statements regarding the cultural or theoretical perspectives of the researchers, the exception being the study conducted by McCradden. Furthermore, 10 studies addressed neither the influence of the researcher on the research nor the influence of the research on the researcher. Almost all studies (n=11) presented evidence of ethical approval by the respective body. Six studies showed unclear congruity between the stated philosophical perspective and the research methodology. Results of the quality assessment are presented in table 2.
Study characteristics
Quality assessment of included studies
Meta-aggregation
A total of 39 findings rated as ‘unequivocal’ or ‘credible’ were extracted from 12 studies included in the synthesis. The 39 findings were aggregated into 12 categories, which were subsequently classified into 3 synthesised findings. Figure 2 shows the summary of study findings, categories and synthesised findings on public perceptions on the application of AI in healthcare.
Meta-synthesis findings of the general public’s perceptions on the application of artificial intelligence (AI) in healthcare. C, credible; U, unequivocal.
Synthesised finding 1: advantages of medical AI from the public’s perspective
The first theme integrated from the included studies was that, in the public eye, medical AI has several advantages. For instance, AI has a large data storage capacity and remarkable efficiency, and it can help monitor and promote health in real time.
Category 1: AI has the advantage of large data storage capacity
The public described the role of AI’s huge data storage capacity in meeting their medical needs. According to most individuals, the AI system can be used to seek more personalised and actionable information. Through a medical AI system, medical information that is easier to understand can be obtained, and medical information or data can be compared to provide more evidence-based suggestions. Additionally, the public could get a second opinion beyond that of their care providers. The large amount of medical data possessed by AI also becomes an important aid to making accurate diagnoses. In the eyes of the public, healthcare AI is more intelligent and can use more information to make a proper diagnosis. Two exemplar quotes follow:
I mean, it’s (AI) not a human. It’s got more data, so probably. … [I]t probably has more intelligence; it just has more information to work with to try to come up with a proper diagnosis. … I don’t think you will cure a lot of diseases without that advanced intellect.32
Exactly, with such a report you could go to another dentist and get a second opinion. This would be fantastic, right.33
Category 2: AI is remarkably efficient
High efficiency is considered one of the outstanding advantages of AI technology applied to healthcare. According to most members of the public, healthcare AI can improve the efficiency of medical tasks, such as imaging scans, thereby reducing waiting times. In addition, AI can process massive amounts of data to detect possible abnormalities in time to speed up diagnosis and treatment, thereby preventing disease deterioration. Two exemplar quotes follow:
When you can reach out and have a sample size of a group of ten million people and to be able to extract data from that … a team of researchers can’t do that. You need AI.34
If the app says, ‘You probably have melanoma—go see your doctor,’ they might actually get in there sooner…so it could be lifesaving.35
Category 3: AI helps monitor and promote health in real time
In the eyes of the public, medical AI can continuously track and collect health data to help users understand their health status, identify potential health problems in time and provide corresponding suggestions. The data collected by medical AI can also provide a basis for physicians to make medical decisions. Moreover, healthcare AI was perceived as a useful tool for helping individuals prepare for clinical visits. Specifically, it can provide reliable information that individuals can research and use to construct relevant questions prior to the consultation. In this way, people can be better prepared for consultations with their care providers. Two exemplar quotes follow:
I would use it (healthcare AI) because I think the more information you can give to your doctor, the better off he/she’s going to be when it comes to treating something that you might have, whether it’s a frailty or whatever, and if things like this can help improve the quality of people’s lives as we age, then I think it’s a good thing.34
Maybe give a user questions that they can ask the doctor, because that’s the other thing I noticed, is that a lot of people don’t get the results they want, or the medical outcomes, because they don’t know what questions to ask the doctor. ……But if AI could be like, ‘Hey, here is your results, do you feel this? Or do you have problems breathing? Or so on and so forth, and if you do, please bring this up with your doctor.’ My stepmother works in the ER and she’s an RN [Registered Nurse]. And she’s like, ‘Half the time when people come in, if they were just able to ask the right questions, they would be in and out, they’d start treatment immediately.’36
Synthesised finding 2: ethical and legal concerns about medical AI from the public’s perspective
Most studies mentioned the public’s concerns about ethical and legal issues surrounding the application of medical AI. First, people expressed concerns about the reliability of medical AI, as most of them had no knowledge of how the AI system works. Second, the public expressed concerns about data ethics in medical AI. Third, the responsibilities and rights of the different parties involved in the application of medical AI are currently not clear. In addition, some people believed that the use of medical AI will affect communication between people. Some members of the public were also worried that too much reliance on AI technology will affect the performance of medical staff. Finally, the public raised concerns over the cost of medical AI.
Category 4: concerns about the reliability of AI
The public had doubts about the accuracy and reliability of health data recorded by AI. AI algorithms have black-box properties: for the public, the process by which medical AI makes decisions through calculations is opaque and difficult to understand. This lack of transparency puts the credibility of medical AI into question. In addition, the public was worried that AI could exacerbate biases arising from an inherently biased learning dataset or from developers inadvertently incorporating their own biases into AI algorithms. Moreover, some people reported finding errors in their health records, and did not know whether medical staff could detect and fix errors in the AI platforms in use. Two exemplar quotes follow:
I would need proof that it works and what you’re actually getting is meaningful information. Like it’s not just some crap. If it’s going to make recommendations to me, I want them to be proven that they’re actually legit.36
So I’ve had a lot of different things in my medical chart that are inaccurate, very inaccurate, so if they’re training an artificial intelligence that this is facts, it’s like, well no.37
Category 5: concerns about data security and privacy protection
Data security and privacy are major concerns for the public in terms of data ethics. The public’s main concern is whether medical AI systems contain confidentiality features and whether they can protect sensitive health information from potential hacking or data leakage. Another concern is that health data provided to a medical AI could be sold or used for other purposes that most people disagreed with. In addition, some members of the public expressed concerns about medical apps sharing personal data for disease diagnosis. Moreover, some devices with monitoring functions also made most people feel that their privacy was being violated. Two exemplar quotes follow:
There is always a possibility of hackers taking over telemedicine platforms and causing data theft. Apart from that, when there are security lapses, the possibility of stealing vital bank information from the mobile (that is used for accessing the mental health service) is also possible…38
Are they going to take my information, are they going to sell it? So, it kind of makes you scared when other companies are buying it.26
Category 6: concerns about the responsibilities and rights associated with the application of medical AI
The public was unsure whether the data collected by AI belonged to the patient alone, and what level of access could be granted to developers or service providers. At the same time, people had concerns over who could be held responsible for errors made by medical AI. In addition, some members of the public were worried that low-quality AI products may emerge when supervision is insufficient, thereby harming the interests of users. Two exemplar quotes follow:
Several legal issues are yet to be clarified…for instance, if there is a misdiagnosis or missed diagnosis…who will the patient sue…. Doctor? Developer? Platform owners?38
I have some background in electronics… The way things are made, ‘cause I’ve actually worked in the industry of making medical equipment, it’s all about using the cheapest method to get the end result. Well, electronics fail. They just do.35
Category 7: concerns about communication being affected by AI
From the public’s perspective, their medical needs can only be met if someone understands what they are expressing. They argued that under the depersonalised procedures of AI machines, in which patients become numbers, they may be treated in an indiscriminate manner. Similarly, AI cannot understand patients’ emotions during communication, and thus the responses provided by AI were considered depersonalised and dehumanising. In addition, patients believed AI has a negative impact on interpersonal communication because people do not relate to each other under the atmosphere of AI; communication with medical AI may therefore be inefficient, both for patients and for doctors who prefer face-to-face communication. Two exemplar quotes follow:
Emotionally, a robot would not appeal to me. It can be nice and say nice things, but I would have emotional difficulties with it.39
I don’t find it very appropriate. First of all, it’s going to take jobs away from health professionals. If the app has to tell them, suggest things or whatever, there’s no communication there, like face-to face.26
Category 8: concerns about the over-reliance of healthcare workers on medical AI
Although the public acknowledges that medical AI can help medical staff become more efficient, they raised concerns that doctors may become accustomed to using AI technology to process all information, which will affect their basic abilities, such as reading. This implies that without access to these AI tools, high-quality care may not be provided. In addition, people believed that over-reliance on AI programmes or algorithms will reduce the insight of medical staff, who may lose some soft skills or even become unable to work without AI. These concerns reflect the public’s view of the role of AI in medical practice, namely that AI should only be used as an auxiliary tool. Two exemplar quotes follow:
If they were to get hacked or a system goes down … like what’s the contingency plan, but what is the contingency plan? If you have all these doctors who are so used to having this artificial intelligence read all these, and they don’t have the skill of reading it, then what happens?32
So that’s a concern, that you lose some of those soft skills and that relies on intuition when you rely solely on AI, on computers and programs and algorithms.26
Category 9: concerns about economic impact
The public expressed concerns about the potential financial burden of medical AI, with many fearing it may increase healthcare costs that will be passed on to patients. First, in their opinion, AI is expensive to develop and deploy. Second, they worried about the impact of AI recommendations on the types of treatment covered by insurance; for example, AI may recommend a treatment that most patients cannot afford. In addition, medical AI requires devices, network access and other hardware, which low-income groups may not be able to afford, and this may exacerbate inequalities in healthcare. Two exemplar quotes follow:
Robotic surgery is new, I don’t know the reimbursement policy or how much insurance will cover it. If the cost is too much for me personally, then I can’t afford it.40
All these devices, technology, AI, etc., require high-speed internet … patients who have basic livelihood issues cannot afford a device or internet.38
Synthesised finding 3: public suggestions on the application of medical AI
The public expressed views on the application scenarios for medical AI and the conditions that can facilitate its application. They suggested that medical AI should first meet individual needs and respect the autonomy of the public. In addition, medical AI should be transparent and credible, as well as properly regulated. Finally, AI should only be used as an auxiliary tool in medical practice, not as a decision maker.
Category 10: meet individual needs and respect users’ autonomy
The public indicated that medical AI should fully consider users’ specific needs; they considered the provision of personalised information a key feature of AI. Medical AI should also be usable by people of all ages, whether tech-savvy or not, and older people may need simpler modes of AI interaction. Some argued that medical AI will be more acceptable if it can provide additional functionality while performing its core functions. In addition, they indicated that medical AI should only provide risk levels, not a definite diagnosis, and when medical AI makes a recommendation, it should be up to the users to decide whether or not to follow it, rather than being forced to do so. For example, when an app recommends seeing a doctor, the recommendation should not be binding, nor should it take away the user’s freedom to decide whether to see a doctor. Two exemplar quotes follow:
User-friendliness is an important precondition if you want to entice people to use it (mobile health (mHealth) apps).37
I would like her [the SAR ‘Alice’] (robot) in my environment … For when something has been spilled and she cleans it up and other things … But I decide when she meddles with me.39
Category 11: improve the transparency and credibility of medical AI
The public will be more receptive to medical AI technology and related research if there is transparency about how data are used in health AI. Moreover, some people expressed the need to understand how AI systems generate medical information so that they can decide whether to trust the advice provided by AI. Another approach to increasing the credibility of medical AI is to disclose its information sources. In addition, people also stressed the need for proper supervision and management of medical AI; endorsement by healthcare providers and government regulators may also increase public acceptance of AI. Two exemplar quotes follow:
My level of trust would depend on the source naturally. If it’s from Joe down the street, obviously I wouldn’t be too crazy about it. But if it’s from a trusted source, like a well-respected medical organization or something like that, like John Hopkins or Mayo Clinic, that would probably help build a little bit of trust.36
If you would also give it approval because of a ministry or because of a legal regulation or something like that, this guarantee should be legal. The responsibility lies with the government with regard to its quality.37
Category 12: use AI as an auxiliary tool in medical practice, not as a decision maker
The public held the view that the human element should not be removed from the healthcare process; thus, medical AI should only be a complementary service, not a replacement for professional health workers, and the final decision should be made by real people, the users of AI (doctor, nurse, patient, etc). The public also mentioned that the information provided by AI should be for reference only, not for determining patient treatment. Finally, they hoped that medical AI could be equipped with assistive functions to provide more detailed information beyond what they mainly want to know. Two exemplar quotes follow:
As long as it’s a tool, like the doctor uses the tool and the doctor makes the call. As long as the doctor is making the call, and it’s not a computer telling the doctor what to do.26
They report that they would like to receive results not only of findings based on the questions of the referring physician (ie, the primary aims of the scans) but also of incidental or unrequested findings that can be extracted from the scan.34
Discussion
This meta-synthesis summarised the public’s attitudes and perceptions towards medical AI. Twelve qualitative studies were included, yielding 39 findings that were summarised into 12 categories and further generalised into 3 synthesised findings. The analysis revealed that while the public acknowledges the convenience and benefits of medical AI, there are many concerns about its implementation, such as personal privacy, data security and regulation. In addition, members of the public gave their opinions on how to increase the credibility and acceptability of AI. These findings provide important insights that can serve as a reference for future research, development and application of medical AI.
Understanding how medical AI works will help improve its acceptability
AI is already widely used in healthcare, and the studies included in this analysis involved the use of AI in aspects such as disease screening, diagnosis, risk warning, adjuvant therapy and intelligent healthcare. In addition, AI is increasingly being applied in the research and development of new drugs,41 as well as in the prevention and treatment of COVID-19.42 With the accumulation of massive medical data and improvements in hardware computing capacity, medical AI has built data-driven deep learning systems,43 through which it can meet the public’s medical and health needs efficiently and with high quality across many aspects of healthcare. The present results show that the public fully recognises the advantages of medical AI. However, two types of AI technologies used in healthcare, machine learning (ML) and deep learning (DL), have black-box attributes, in the sense that the models cannot explain how their predictions are made.44 45 As a result, users are unable to understand the prediction process or verify the results given by ML or DL models, leading to low public acceptance of medical AI.46 In several of the original studies included in this paper, the public expressed doubts about the effectiveness and accuracy of medical AI.32 35 37 Therefore, overcoming the black-box problem and helping the public understand how models work and make predictions is an important aspect of the evolution of medical AI. This challenge could be addressed through explainable artificial intelligence (XAI), defined as a set of features that explain how an AI model constructs its predictions.47 For example, in a study on categorising tuberculosis diagnoses through deep learning on chest radiographs, researchers used heat maps to show areas of increased activation of the deep learning network that could be inferred to be important for diagnosis.48 Therefore, by adding XAI technology to ML and DL models, the use of AI in healthcare will become more reliable and acceptable.49–51 In addition, before application, the public should be educated on the principles of the medical AI system, including how it works.
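As a rough sketch of the heat-map idea, the snippet below performs a Grad-CAM-style computation on a placeholder network and random input; the model, input and layer choice are assumptions for illustration, and this is not the pipeline of the cited tuberculosis study.48

```python
# Grad-CAM-style sketch: weight the last convolutional feature maps by the
# gradient of the predicted class score, producing a heat map that can be
# overlaid on the input image. Model and input are placeholders.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()   # untrained stand-in for a diagnostic CNN
activations, gradients = {}, {}

def save_maps(module, inputs, output):
    """Keep the feature maps and capture their gradients during backward."""
    activations["maps"] = output
    output.register_hook(lambda grad: gradients.update(maps=grad))

model.layer4.register_forward_hook(save_maps)  # last convolutional stage

x = torch.randn(1, 3, 224, 224)          # placeholder "radiograph"
scores = model(x)
scores[0, scores.argmax()].backward()    # gradient of the top class score

# Weight each feature map by its average gradient, combine, keep positives.
weights = gradients["maps"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["maps"]).sum(dim=1, keepdim=True))
heatmap = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False)
print(heatmap.shape)  # torch.Size([1, 1, 224, 224]), overlayable on the image
```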
A safe and healthy AI application environment is crucial
The literature review and the results of this study indicate that the public has concerns over medical AI, including those pertaining to security, privacy protection, responsibility attribution and reimbursement of medical expenses, all of which relate to immature policy and regulatory systems.52 Regarding medical security, AI systems can cause medical security incidents due to malicious attacks by hackers,53 system loopholes,54 algorithm differences55 and other factors that may threaten the safety of patients’ lives. With regard to privacy protection, the development of medical AI requires the collection of a wide range of health data,56 resulting in varying degrees of risk to the public in terms of physical privacy, information privacy and the right to decisional privacy. According to a previous study, 59.72% of the public was concerned about privacy disclosure during the application of medical AI.57 Personal private information may be obtained, spread and used by unauthorised individuals through network breaches, resulting in violations of personal privacy. Information derived from AI learning and analysis has also become one of the important routes of privacy violation.58 At the same time, the emergence of AI has created a fuzzy zone between academic research and clinical application, making the public wary of the exchange of their private information between commercial and non-commercial platforms. Notably, in a study of 4000 American adults, only 11% were willing to share health data with tech companies, versus 72% with physicians.59 In terms of rights and responsibilities, public health data are an important basis for AI, but the ownership of data management has always been controversial. Conflicts of interest between data source subjects and data processors persist, and ways to guarantee the public’s informed consent in the process of using medical data need to be established. When AI threatens public medical security or causes an accident, the definition of the subject of responsibility remains unclear: there is currently no consensus on whether responsibility in the event of an accident should be borne by medical staff, AI producers or the AI itself. Regarding expenses, the operation of medical AI often requires the support of expensive equipment, networks and other hardware or software facilities. This, coupled with the currently unclear insurance reimbursement system for medical AI expenses, may increase the financial burden on the public arising from the use of medical AI.
In summary, the establishment and improvement of medical AI policy and regulatory systems is key to enhancing its promotion and application. Most importantly, to maximise the protection of public health and safety, a quality evaluation system for medical AI should be formulated, and its acceptance criteria and regulatory system should be improved to enhance its service and protective performance. Second, the management of private information such as medical data should be improved to ensure the privacy and security of public information throughout the whole process of development, application and destruction of medical AI. Third, to avoid adverse events and improve the public’s trust in medical AI, a responsibility supervision system and a rights protection mechanism should be established and improved, and the rights and responsibilities associated with medical AI should be clarified. Finally, regulations should be formulated to reasonably control the costs associated with medical AI, and the insurance reimbursement system should be improved to address people’s economic concerns.
The public expects ‘people-oriented’ medical AI
In this analysis, ethical issues such as social problems, excessive reliance on AI and the role of AI also attracted wide attention from the public. While medical AI has broadened the channels of communication between the public and healthcare workers, it also faces problems such as conflicting medical advice. Information asymmetry leads to public distrust of medical staff and makes the public anxious and worried about their own health conditions. In addition, the AI products currently in use are basically programmed mechanical devices, which may lead to an absence of humanised therapies.60 61 The use of medical AI may also deprive the public of autonomy and weaken emotional support among people. This problem is particularly evident in the application of AI to caring for the elderly39 and to psychotherapy.62 Moreover, members of the public believed that both they and medical staff are overdependent on AI, and that there is a risk that their skills and knowledge may be eroded by AI.
The aforementioned concerns suggest that the role of medical AI in healthcare is still not clearly defined. Furthermore, the public hold the view that AI should only serve as an auxiliary tool. Therefore, the concept of ‘people-oriented’ care and the corresponding ethical principles should be implemented throughout the application of medical AI. Additionally, the research, development and application of medical AI should be patient-centred and follow the medical ethical principles of ‘putting patients’ interests first, respecting patients and being fair’. As medical AI becomes increasingly popular, various fields have made attempts to strengthen its ethical governance. For example, in the fields of nuclear medicine and molecular imaging, 16 ethical principles have been proposed to guide the development and implementation of AI.63 These include ‘common good and benefit’, ‘first do no harm’ and ‘patient safety and quality of care’. In summary, ethical issues should be considered during the development of medical AI to ensure maximum benefit to human well-being.
Limitations and future directions
Although this meta-synthesis adopted a rigorous design and complied with the JBI meta-aggregation approach, several limitations were observed. First, only studies published in English and Chinese were included, which may cause language bias. In addition, the participants of each study had different experience with the application of medical AI. Specifically, of the 12 included studies, six23 26 33 34 38 64 did not specify whether interviewees had experience with medical AI, two32 39 reported that respondents had no experience with medical AI, two36 40 reported that respondents had used medical AI technology and the other two35 37 included both experienced and inexperienced respondents. Since participants’ perceptions of medical AI may be affected by their experience with it, future research should consider experience as a variable and compare differences in the perceptions of various respondents, along with possible reasons, to arrive at richer and stronger conclusions.
Clinical implications for health managers and policymakers
According to this meta-synthesis, one of the public’s main concerns was the right to informed consent. Therefore, medical institutions should establish management systems to guide the use of AI and guarantee the public’s right to informed consent, especially institutions that maintain their own data infrastructure. Second, health institutions should fully understand the performance of their medical AI platforms, clarify their role in the process of diagnosis and treatment, avoid over-reliance by medical staff on medical AI and ensure the safety of treatment.
Conclusions
This meta-synthesis reveals that, from the public’s perspective, medical AI has greatly improved modern medical care and healthcare, but has also brought many social and ethical issues and challenges. This study also puts forward suggestions, from the perspective of the public, to promote the application of medical AI. As an important component of the healthcare system, the public’s recognition of the advantages of medical AI is an important driving force for its development. Meanwhile, the public’s concerns about the application of medical AI should be taken seriously and used as a reference for the development, operation and management of medical AI to promote its continued application. The management of AI should be strengthened through both legal governance and ethical constraints, minimising or eliminating its disadvantages and maximising its advantages while upholding the social values of security, fairness and justice.
Data availability statement
All data relevant to the study are included in the article or uploaded as supplementary information.
Ethics statements
Patient consent for publication
Ethics approval
Not applicable.
Supplementary materials
Supplementary Data
This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.
Footnotes
CW, HX and DB contributed equally.
Contributors CW conceived the study idea, participated in study design and method development, screened titles, abstracts and full-text articles, carried out the data extraction and quality appraisal of included articles, coded the extracted findings and performed the data synthesis. CW wrote the manuscript. DB independently screened the titles, abstracts and full texts of the retrieved articles to ensure that they met the inclusion criteria, and contributed to the writing of the manuscript. XC applied for regulatory approval, independently extracted data from the included articles, evaluated their quality, coded the extracted results and contributed to the final synthesis of the data. JG was involved in designing the study, developing the methods, contributing to the synthesis of the extracted findings and monitoring article quality. XJ was involved in study design, research method development and monitoring article quality. XJ was responsible for the overall content as the guarantor. HX contributed to the writing of the subsequent revision of the manuscript and enhanced the English quality of this article. All authors read and approved the final manuscript.
Funding This work was supported by Medical Technology Project of Health Commission of Sichuan Province (21PJ109).
Competing interests None declared.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.