Abstract
Objective To identify barriers to hospital participation in controlled cluster trials of clinical decision support (CDS) and potential strategies for addressing barriers.
Design Qualitative descriptive design comprising semistructured interviews.
Setting Five hospitals in New South Wales and one hospital in Queensland, Australia.
Participants Senior hospital staff, including department directors, chief information officers and those working in health informatics teams.
Results 20 senior hospital staff took part. Barriers to hospital-level recruitment primarily related to perceptions of risk associated with not implementing CDS as a control site. Perceived risks included reductions in patient safety, reputational risk and increased likelihood that benefits would not be achieved following electronic medical record (EMR) implementation without CDS alerts in place. Senior staff recommended clear communication of trial information to all relevant stakeholders as a key strategy for boosting hospital-level participation in trials.
Conclusion Hospital participation in controlled cluster trials of CDS is hindered by perceptions that adopting an EMR without CDS is risky for both patients and organisations. The improvements in safety expected to follow CDS implementation make it challenging and counterintuitive for hospitals to implement an EMR without incorporating CDS alerts for the purposes of a research trial. To counteract these barriers, clear communication regarding the evidence base and rationale for a controlled trial is needed.
Keywords: qualitative research, hospitals, health informatics
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
STRENGTHS AND LIMITATIONS OF THIS STUDY
This was a multisite study, with data collected across six Australian hospitals.
Data analysis was completed independently by three researchers who came together to review codes, discuss discrepancies and reach a consensus on key themes for reporting.
Although purposive sampling was used, and all participants were senior staff, not all participants were actively involved in their organisation’s decision to participate in the trial of clinical decision support, so barriers and strategies identified may not be exhaustive.
Introduction
Controlled trials, where one cohort is exposed to an intervention and another cohort is not, are viewed as essential for determining the effectiveness of an intervention. In trials of hospital-wide digital health interventions, such as electronic medical records (EMRs), delivering the intervention to selected individuals or groups, such as clinicians or patients, risks contamination and is practically difficult, so intervention delivery is typically at the site or hospital level.1 In controlled cluster trials, individual participants are not recruited or consented; instead, selected hospitals adopt a digital health intervention, while others refrain from or delay implementing the intervention during the data collection period.2 3
Clinical decision support (CDS) alerts are viewed as a key safety feature of EMRs.4 To date, however, few controlled trials have been undertaken to assess the effectiveness of alerts,5 6 and as a result, organisations have limited robust evidence to guide alert selection. For example, drug–drug interaction (DDI) alerts, which trigger at the point of medication order entry to warn prescribers of potentially dangerous drug combinations, are frequently used,7 8 yet no controlled trials have examined the effectiveness of DDI alert sets (eg, ‘severe’ or ‘moderate’ DDI alerts) to reduce medication errors and patient harms.9 10
Our non-randomised controlled pre–post trial of DDI alerts in EMRs aimed to fill this evidence gap by comparing rates of DDIs and associated patient harms before and after implementation of an EMR, with (intervention) or without (control) DDI alerts.11 We attempted to recruit six hospitals into our trial but encountered significant challenges in recruiting control hospitals. Hospitals were receptive to participating in a trial but were opposed to implementing their EMR system without DDI alerts in place, despite the limited evidence available on DDI alert effectiveness.
Although there is considerable research exploring challenges associated with individual participant recruitment into trials,12–16 much less is known about barriers and facilitators to site-level recruitment into controlled trials, and no previous research has examined site-level recruitment into trials of CDS. In the few studies that have examined hospital recruitment, most barriers identified relate to challenges associated with intervention delivery. For example, in a study that attempted to recruit nearly 100 hospitals for a pragmatic cluster randomised trial of postacute stroke services, the primary reason for hospitals declining to participate was insufficient staff or financial resources to deliver the intervention.17 However, for trials of CDS, such as DDI alerts, this factor is unlikely to be a barrier, as intervention delivery typically consists of ‘turning on’ the CDS functionality in an EMR with some additional clinician training on its use.
In this study, we aimed to identify barriers to site-level participation in controlled cluster trials of CDS, and potential strategies for addressing these barriers. Based on the challenges we encountered in recruiting control sites for our trial, we expected to identify some unique recruitment challenges for trials of CDS. With the rapid acceleration of CDS implementation in hospitals, and digital health interventions more broadly, we hoped our findings would be of value to others attempting to generate a robust evidence base for these important interventions.
Methods
Design
This study used a qualitative descriptive design.
Setting
Study sites included five hospitals in New South Wales (NSW) and one hospital in Queensland (QLD), Australia (see online supplemental appendix 1). These sites were initially approached to participate in our trial of DDI alerts.11 At the time interviews were conducted, two hospitals had no DDI alerts in place, and four had DDI alerts operational in their EMRs. The number of DDI alerts that were available varied between sites.
Recruitment and participants
Senior hospital staff from the six hospitals were purposively approached to take part in a qualitative interview to explore their views on evaluation of CDS alerts. Interviews formed part of our larger project focused on determining effectiveness of CDS alerts in EMR systems11 and were conducted after the commencement of our cluster trial but before trial completion. The sample included department directors, chief information officers and those working in health informatics teams. A snowball recruitment approach was also used to identify participants, where interviewees recommended additional colleagues to be interviewed. All participants provided written informed consent prior to commencing the interview. Participation was voluntary and no compensation was provided.
The Standards for Reporting Qualitative Research checklist18 was used to guide manuscript preparation.
Patient involvement
A patient was involved in the conduct of this research. A member of the public joined our project steering committee during the early stages of our trial and provided input on study design, outcomes and dissemination opportunities for patients.
Data collection
As data collection occurred during COVID-19 pandemic restrictions and across multiple states in Australia, semistructured interviews were conducted via videoconference. The interviewer was a human factors researcher with expertise in CDS evaluation and qualitative research (MTB), and was independent of all study hospitals (ie, not employed by or affiliated with any of them). Interviews comprised two parts: questions related to (1) recruitment of hospitals into trials of CDS and (2) evidence-based decision-making for selection and implementation of CDS and digital health interventions in general. Findings from the latter component were published previously,19 and the former component is the focus of the current paper. The interview guide for component 1 appears in online supplemental appendix 2. Participants were initially asked how and why their hospital decided to participate in the controlled trial of DDI alerts and then to reflect on barriers and facilitators to hospital participation in CDS trials more broadly.
Data analysis
Interviews were audiorecorded and transcribed. A general inductive content analysis approach was used to identify themes from deidentified transcripts.20 Two researchers experienced in qualitative research and health information technology evaluation (MTB and BAVD) initially coded three transcripts independently, then came together to compare themes and agree on a coding framework for analysis. The remaining interviews were then independently coded by three researchers (MTB, BAVD and KS) using the framework. The three researchers came together to review codes, discuss discrepancies and agree on key themes for reporting. Any disagreements in the themes identified were resolved through discussion. Data collection and analysis continued until inductive thematic saturation was achieved.21
Results
In total, 34 potential participants were invited to take part in an interview and 20 participants agreed. This included 5 from QLD and 15 from NSW hospitals. Participants were chief information officers (n=2), directors of pharmacy, nursing or clinical pharmacology (n=7), EMR system implementation leads (n=4), director of clinical governance (n=1), directors of medical services (n=3) and chairs of relevant committees/councils (n=3). Each interview ran for an average of 30 min (range 17–55 min). Despite the range of expertise of participants, we found no major differences in the views expressed, so results are presented for all stakeholder groups together.
All senior hospital staff recognised that a key benefit of a controlled trial was the generation of evidence on DDI alert effectiveness. However, participants identified a number of barriers or challenges associated with participating in the research as a control site. As shown in table 1, participants also proposed several strategies for addressing these barriers.
Table 1 Barriers to site-level participation as a control site in controlled trials of CDS and potential strategies
Barriers to participation as a control site
Risk of patient harm
The most frequently reported barrier was the potential risk to patients as a consequence of having the DDI alerts turned off at control hospitals.
You’re at risk of nasty outcomes… there’s been some deaths because of interactions…I was involved in a statin-voriconazole death… an alert would have stopped that…if they read it. (site 2, participant 2)
Participants explained that this risk to patients arose primarily because doctors would assume that CDS alerts were operational and so would not double-check for DDIs. Some participants reported that end-users’ awareness of which alerts were operational in their EMR was poor, and this created a false sense of security.
My concern about the system is always that the clinicians always assume that the system will tell them when they’re doing something wrong, and that it will inform them if there’s a problem. If they see any alert at all, then they know that the system’s watching out for them in some way. Their understanding of how much it watches out for them, obviously, is completely inaccurate. And that’s probably the biggest risk that I see of anything in the system, is that false sense of security. (site 1, participant 1)
With a transient workforce, participants were concerned that prescribers relocating from other districts would assume that DDI alerts were operational at control hospitals, as most other hospitals in Australia that used an EMR have DDI alerts.
We have medical staff come from every other LHD [local health district] and work here… So potentially that is a risk that they're thinking something’s going to happen, but it’s not going to happen in the system. (site 4, participant 1)
Reputational risk
Some participants explained that executive teams had a lower tolerance for risk than frontline staff, and this included tolerance for both patient risk and reputational risk.
People are really cautious, particularly higher ups who don't do clinical work so much and don't use the system so much…they tend to be very cautious around these things…and want every safeguard possible. Because…they don't want their system to be the one that caused harm to a patient. (site 2, participant 4).
Ethical and legal ramifications
Most participants were concerned about the legal and ethical ramifications of participating in the trial as a control site.
I guess it might be perceived as a bit of an ethical issue with… being in control site and not having that intervention. (site 6, participant 2)
Participants referred to CDS alerts as a safety feature or intervention, and most assumed that alerts were effective in reducing patient harms. Interestingly, some participants noted that end-users disliked alerts, and many were aware of prescribers experiencing alert fatigue. Despite this, some senior hospital staff viewed removing DDI alerts as wrong because it constituted removing an effective safety intervention.
What if the media got hold of this if someone was harmed, and they sued the hospital and it came out that we were participating in a trial, and elected not to turn on this safety feature… And we would not, they felt we would not have, I think that we wouldn't have a leg to stand on because we have got the functionality built in, but we didn't turn it on. (site 1, participant 4)
Look, I think one of the things that people look at is that they realize that even though they may not like it, the alert functionality is actually safer…if I turn that off, I've actually created an environment that’s actually going to put us back to causing harm… It’s a bit like turning all the features off on your car…so I take all the safety features off. I take all the beeps and all the other bits off the cameras. Why would I do that? (site 2, participant 5).
Risk of not demonstrating benefits from EMR
A small number of participants also raised political barriers, explaining that there was significant political pressure to show benefits from EMR and CDS. Participants perceived that acting as a control site in a trial, and not turning on DDI alerts, could result in benefits not being demonstrated from EMR implementation at that particular site.
There was huge political pressure to show benefit… Because of, you know, it’s almost a billion dollars of investment that’s gone. So I think that’s, that'd be number one… they really want to see results for their investment (site 2, participant 3)
Allure of new technology
A frequently raised barrier was the attraction of new technology, with participants explaining that hospitals are often waiting for these ‘brand new toys’, so a decision to delay implementation for the purposes of a controlled trial would not be supported by executive and front-line staff.
Everyone wants the latest tech, part of the allure of digital. (site 2, participant 1)
Strategies to boost hospital participation
Effective communication
The most frequently reported strategy to facilitate hospital recruitment as a control site was effective communication of trial information to the site, particularly to front-line clinicians as end-users of CDS. Most senior staff who were also clinicians had not been consulted as part of their site’s decision to participate in the research, and this was viewed as highly problematic.
We needed much more engagement from all our clinicians… And indeed, not only communication, it’s actually that they're part of the decision making, to be engaged in such a project, which would mean that demonstrating the potential benefit would be very important. (site 3, participant 3)
I think, ultimately, the executives need to make the decision, but you definitely need input from the end users. Because as we all understand, the executives don't necessarily use the system and know how it impacts their workflow. So you would need advice or guidance from your end users. (site 5, participant 2)
To avoid any misunderstanding, participants suggested that researchers communicate directly with clinicians about the trial, rather than information being delivered to front-line staff by executive teams. Trial information should include a clear background and rationale for the study, so that all stakeholders understand the current evidence base and why a controlled trial is needed.
I think just kind of that discussion around “look, guys, you know how irritating these [alerts] are, we're not sure they're safe, so this is what we're thinking, and this is how we're going to measure it”. (site 3, participant 2)
Once I understood that basically there was equipoise between the two, that it probably didn't really matter which was the control arm and which was the treatment arm. (site 3, participant 5)
Highlight value of participation
Some participants explained that it was advantageous for hospitals to be seen as participating in research, particularly novel or ground-breaking research, so this could be used as an argument to facilitate recruitment.
The chief executive plus the rest of executive were very much driven around that this was a good opportunity to be involved in some, at that point of time, cutting edge research to actually help prove some of the value around what digitalization brings. (site 2, participant 5)
Extra safety measures
Finally, a small number of participants proposed that extra safety measures could be introduced to reduce the risks associated with being a control site. With respect to CDS alerts, participants suggested interventions such as passive CDS tools (eg, DDI checkers) and turning DDI alerts on in the background so that DDIs could be monitored by researchers or pharmacists without the alerts being visible to prescriber end-users.
Discussion
This qualitative study revealed that hospital participation in controlled cluster trials of CDS is hindered by perceptions that adopting an EMR without CDS is risky for both patients and organisations. The allure of technology and the expected improvements in safety following CDS implementation are drivers for adopting an EMR, making it challenging and somewhat counterintuitive for hospitals to implement EMR without incorporating these safety features for the purposes of a research trial. Senior staff recommended clear communication of trial information to all relevant stakeholders as a key strategy for boosting hospital-level participation in trials.
Previous research has shown that the primary reason hospitals decline to participate in trials is limited resources to carry out the intervention,17 but this barrier did not emerge in our study. However, we uncovered concepts similar to those described in previous studies on individual recruitment into trials and found that these barriers manifested in a unique way for trials of CDS. For example, perceived risk of participation has been identified as a barrier to individual participant recruitment, with individuals less likely to take part in a trial if they perceive exposure to an experimental or untested intervention as too risky.15 16 Risk emerged as a key theme in our results; however, we observed that the risks to patients, reputation and benefits realisation related to the absence of the intervention, not the intervention itself, reflecting participants’ underlying assumption that CDS alerts improve patient safety. Similarly, intervention-related barriers in previous research typically relate to sites being unconvinced of the added value of an intervention over and above usual care processes,17 but we observed the reverse for CDS, with the value of CDS viewed as too great to forgo by participating in a trial as a control site.
The perception that fewer alerts increase risk to patients, and concerns about the legal ramifications of this, have been identified as factors contributing to organisations overalerting users in EMRs.22 This, together with our findings, highlights the importance of consultation and clear communication between research teams and prospective organisations, particularly regarding the evidence base and rationale for controlled trials of CDS. Improved communication was the primary strategy proposed by participants in our study to boost hospital recruitment. This also aligns with a key facilitator of individual participant recruitment identified in previous research: clear trial information, delivered both face to face and in written format by a trustworthy and knowledgeable individual with good communication skills, has been shown to increase individual participant recruitment into trials.15 16 Implementation of CDS, like many digital health interventions, is often driven by the potential benefits that could be achieved, rather than by benefits actually demonstrated,19 23–26 and increasing stakeholder awareness of this, and of the equipoise that currently exists with respect to CDS effectiveness, may abate major concerns held by both front-line clinicians and executive teams. We recommend engaging with organisations early and tailoring study information to highlight the potential benefits of trial participation to each user group. Understanding the needs and values of stakeholders is viewed as critical for successful recruitment into a trial.27 A recent Delphi study recommended making clear to prospective participants not only the potential benefits and harms of trial participation but also how these compare with what would happen if the participant did not take part in the trial.28
To minimise any risk to patients from the absence of CDS, participants suggested additional monitoring for adverse safety events by pharmacists and systems. Consistent with previous research,29 30 interviewees identified a risk that prescribers over-rely on alerts without being aware of which CDS functionalities are in place in the EMR. Our findings suggest that this over-reliance is a particular problem for controlled trials of CDS when the intervention is a frequently used form of CDS, like DDI alerts, or when users have transferred from a different organisation, which is not uncommon. Ensuring end-users are aware of the CDS available within an EMR, via good alert design (eg, making available alerts visible to end-users)31 and training, is critical for minimising inappropriate over-reliance on CDS.
Limitations
This study describes senior hospital staff’s perceptions of barriers to hospital recruitment. Not all participants were actively involved in their organisation’s decision to participate in our trial of DDI alerts. To preserve anonymity, limited demographic information was collected from participants; however, we acknowledge that some characteristics, such as age and career experience, may have influenced the perceptions held and expressed by participants in this study. Complementing interviews with document review (eg, committee meeting decisions) and consultation with all stakeholders involved in trial participation decisions would strengthen these results and potentially identify other barriers to trial participation. Data were collected from senior staff at six Australian hospitals, and findings may not be generalisable to other countries or stakeholders.
Conclusions
Barriers to hospital-level recruitment into controlled cluster trials of CDS related primarily to perceptions of risk associated with not implementing CDS as a control site. These perceived risks included reductions in patient safety, reputational risk and increased likelihood that benefits would not be realised following EMR implementation. To counteract these barriers, consultation and clear communication between research teams and prospective organisations are needed, particularly regarding the evidence base and rationale for conducting a controlled trial of CDS, and any ethical concerns surrounding trial participation.
Data availability statement
No data are available. The qualitative dataset generated and analysed during the current study is not publicly available.
Ethics statements
Patient consent for publication
Ethics approval
This study involves human participants and the study received Human Research Ethics Committee approval (HREC 18/02/21/4.07) from Hunter New England HREC and site-specific governance approval from all participating hospitals. Participants gave informed consent to participate in the study before taking part.
Footnotes
Twitter @lli_sydney, @JWestbrook91
Contributors MTB, SH, ROD, JW, WYZ, LL and AH designed the study. MTB conducted the interviews, MTB, BAVD and KS analysed the interview data, MM and WYZ contributed to interpretation of interview data. All authors contributed to writing of the manuscript and approved the final manuscript. MTB is responsible for the overall content as the guarantor of this study.
Funding This work is supported by the National Health and Medical Research Council (Partnership Grant APP1134824) in partnership with eHealth NSW and eHealth QLD.
Competing interests None declared.
Patient and public involvement A member of the public was involved in the conduct of this research through membership of the project steering committee (see the Patient involvement section).
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.