Abstract
Objective Clinicians need trustworthy clinical practice guidelines to succeed with evidence-based diagnosis and treatment at the bedside. The BMJ Rapid Recommendations explore innovative ways to enhance dissemination and uptake, including multilayered interactive infographics linked to a digitally structured authoring and publication platform (the MAGICapp). We aimed to assess user experiences of physicians in training in various specialties when they interact with these infographics.
Design We conducted a qualitative user-testing study to assess user experience of a convenience sample of physicians in training. User testing was carried out through guided think-aloud sessions. We assessed six facets of user experience using a revised version of Morville’s framework: usefulness, understandability, usability, credibility, desirability and identification.
Setting The University Hospitals of Geneva, a large teaching hospital in Switzerland.
Participants A convenience sample of residents and interns, without restriction regarding medical field or division of care.
Results Most users reported a positive experience. The infographics were understandable and useful to rapidly grasp the key elements of the recommendation, its rationale and supporting evidence, in a credible way. Some users felt intimidated by numbers or the amount of information, although they perceived there could be a learning curve in using the generic formats. Plain language summaries helped complement the visuals but could be further highlighted. Despite their generally positive experience, several users had limited understanding of key GRADE (Grading of Recommendations Assessment, Development and Evaluation) domains of the quality of evidence and remained uncertain about the implications of weak or conditional recommendations.
Conclusion Our study identified several aspects of guideline formats that improve their understandability and usefulness. Guideline organisations can use our findings to adapt their presentation formats to enhance dissemination and uptake in clinical practice. Avenues for research include the interplay between infographics and the digital authoring platform, multiple comparisons and living guidelines.
- MEDICAL EDUCATION & TRAINING
- Protocols & guidelines
- Information technology
Data availability statement
Data used during the current study are available from the corresponding author upon request.
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
STRENGTHS AND LIMITATIONS OF THIS STUDY
We conducted rigorous user testing and analysis that translated users’ experiences into practical and transferable findings.
Interviews, in the form of guided think-aloud sessions, took place in a French-speaking teaching hospital and may not be fully generalisable to other settings.
We did not investigate knowledge retention, or impact on guideline adherence and related patient outcomes.
Our study did not compare different formats, although the sample of infographics tested included various iterations in design and across the RapidRecs.
Introduction
Incorporating current best evidence into decisions at the point of care is an ongoing challenge for physicians and care providers. An important gap remains between results from clinical research and their implementation in clinical practice, both because of challenges related to the production and dissemination of research findings and because of the increasing demands and time constraints of physicians’ daily practice.1–3 The volume of evidence grows exponentially, with more than 4000 articles published daily, including more than 100 randomised trials and 20 systematic reviews.4 Most published findings—even from randomised trials—suffer from substantial risk of bias,5 resulting in a majority of clinical decisions being informed by low certainty evidence.6 In this context of information overload, clinicians need trustworthy and readily available evidence to succeed with evidence-based diagnosis and treatment at the bedside.7–9 Haynes’ evidence-based healthcare pyramid 5.010 models the information hierarchy for clinical decision-making. This pyramid highlights the importance of summarised, synthesised and filtered information from preappraised literature in clinical decision-making. It consists of five major layers: studies, systematic reviews, systematically derived recommendations (guidelines), synthesised summaries for clinical reference, and systems (that is, the integration of evidence and guidance within computerised decision support systems and electronic health records). Each layer builds on the lower levels and provides progressively more useful information for guiding clinical practice. The need for synthesised information has been highlighted by previous research, and guideline adherence may have a positive impact on patient outcomes and healthcare costs.11–14
However, clinical practice guidelines can also suffer from major limitations. They are vulnerable to financial and intellectual interests.15 16 They can still be more consensus based than evidence based, with limited to no involvement from patient partners.17–19 Their development process is cumbersome and time-consuming, and they rapidly become outdated as primary evidence accumulates.20 21 Correa et al22 conducted a meta-review on barriers and facilitators to the implementation of clinical practice guidelines and identified the absence of leadership within organisations, lack of time among healthcare professionals, and doubts regarding the credibility and applicability of clinical practice guidelines as major factors hindering implementation. Major advances in guideline methodology have occurred in the last decade, including standards for their development and assessment of transparency and rigour.7 23 The Institute of Medicine (IOM) has clarified explicit trustworthiness criteria.24 The Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group has developed explicit approaches on how to summarise evidence, rate its quality and move in a transparent manner from evidence to recommendations.25–29 Application of these standards and methods, as well as management of conflicts of interest, heavily impacts the strength of recommendations,6 30 31 yet they will not suffice to overcome barriers to implementation at the point of care.8 32 33
Another challenge relates to the presentation formats of guidelines. Sharp et al34 conducted a systematic review examining various formats used for summarising evidence syntheses and their effectiveness and acceptability. The findings suggest that alternative formats to standard systematic review tables improved knowledge and understanding, but many trials lacked adequate reporting on study quality and no ‘gold standard’ format emerged.
To address these issues, the MAGIC Evidence Ecosystem Foundation (www.magicevidence.org), in collaboration with The British Medical Journal (BMJ), launched the BMJ Rapid Recommendations (also known as RapidRecs), which aim to produce and disseminate a new generation of trustworthy, timely and actionable recommendations on the basis of new practice-changing evidence or complex ignored evidence.35 36 Using an online authoring platform—the MAGICapp (https://app.magicapp.org)—all data are digitally structured for dissemination in various formats and channels: multilayered formats for recommendations, evidence summaries, patient decision aids and widgets for other online platforms.8 37–40 Each RapidRec begins with an interactive infographic at the top, codesigned by MAGIC and the BMJ. The infographic operates as a doorway to the multilayered elements of the online version of the guideline published in the MAGICapp (www.bmj.com/rapid-recommendations).
Infographics—that is, visual representations of data using icons, illustrations or charts with a minimum of text—have been widely used to disseminate research findings and in medical education.41 Their potential merits include increased dissemination and readership among clinicians,42 43 who find them user-friendly and enjoyable.44–46 They answer the need for rapid information retrieval.47 48 They may help overcome statistical illiteracy by facilitating the understanding of complex data with less cognitive load than text.49 50 However, some have raised concerns about actual knowledge retention or understanding, suggesting infographics may oversimplify information or prevent in-depth reading.41 42 46 51 52
Less is known about the value of infographics as synopses of clinical practice guidelines.53 This study aims to assess the user experiences of clinicians when they interact with the RapidRecs’ infographics. Our main objectives were to identify features of guideline formats that facilitate usability and understanding or, on the contrary, hinder their use. Since their creation in 2016, RapidRecs have been widely disseminated and have reached high levels of popularity, as assessed by their high Altmetric scores.54 55 They have even been adopted by the WHO for its COVID-19 therapeutic guidelines.56 They thus provide a unique opportunity to study the usefulness of infographics for the dissemination of trustworthy recommendations for clinical practice.
Material and methods
Study design
We applied a user-testing design to assess the user experiences of residents and interns in a large teaching hospital in Switzerland when using BMJ RapidRecs infographics relevant to their practice and learning. User testing is a widely used method in web design research.57 It aims to capture and study the user’s experience through interview methods. Seven facets of user experience have been described in Morville’s ‘honeycomb’ model:58 59 findability, accessibility, usability, usefulness, credibility, desirability and value. In our study, we conducted non-directive interviews and encouraged participants to ‘think aloud’. These interviews occurred at their workplace. Interviews were recorded and analysed using Morville’s framework for user experience. The research is part of an MD thesis (TH).
Intervention tested: the RapidRec Infographics
The BMJ RapidRecs infographics have been codesigned in a unique collaboration between the BMJ and the MAGIC Evidence Ecosystem Foundation, combining skills from data graphic designers, journal editors, practising clinicians and experts in evidence-based medicine. Each RapidRec has its own infographic, presented in a synoptic view at the top of the publication.
The content of the RapidRec infographics is specific to the clinical question being addressed, while the design is generic across RapidRecs. For each guideline, an unconflicted and multidisciplinary international panel, coordinated by MAGIC and the BMJ, appraised the evidence on a clinical question and issued recommendations. Each panel included front-line clinicians (generalists and specialists in the question), methodologists, as well as patient partners. Each guideline panel had about 20–25 members, including 2–4 patient partners.35 36 60 The design of the infographic is multilayered, interactive and organised in a similar generic way across RapidRecs. The content and data of the guideline are authored in the MAGICapp, a digitally structured authoring and publication platform, whose structure applies key elements of the IOM trustworthiness standards, GRADE’s rigorous methodology and the Evidence to Decision framework (EtD).24–29 The data in the MAGICapp are then transformed into an infographic in collaboration with data graphic designers. The infographic then operates as a doorway to each multilayered element of the online version of the guideline in the MAGICapp, using widget technology.61
All existing RapidRecs can be found at the BMJ online portal (www.bmj.com/rapid-recommendations). Figures 1–3 illustrate the static view of one RapidRec on ‘Corticosteroids for sore throat’—the actual infographic being interactive and linking to MAGICapp content (direct link: https://www.bmj.com/content/358/bmj.j4090).62 The first layer of the guideline has three parts (figure 1): (1) the population to whom the recommendation applies (or may not apply); (2) the intervention and comparison (which can become several interventions in multiple comparison guidelines); and (3) the actual recommendation(s). The direction (for vs against) and strength (strong vs weak) of the recommendation are both depicted with arrows and text. The arrows are meant to visually convey to the user the four main options according to GRADE. The written statement of the recommendation applies formal GRADE guidance, using the word ‘recommend’ for strong recommendations and the word ‘suggest’ for weak or conditional recommendations.63 64
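To make the generic structure of this first layer concrete, the sketch below models it as a small data structure in Python. This is a minimal illustration with hypothetical names and example values; it is not the MAGICapp’s actual schema or the BMJ’s implementation.

from dataclasses import dataclass
from enum import Enum

class Direction(Enum):
    FOR = "for the intervention"
    AGAINST = "against the intervention"

class Strength(Enum):
    STRONG = "We recommend"   # formal GRADE wording for strong recommendations
    WEAK = "We suggest"       # formal GRADE wording for weak/conditional ones

@dataclass
class Recommendation:
    population: str            # to whom the recommendation applies
    excluded_population: str   # to whom it may not apply
    intervention: str
    comparison: str
    direction: Direction       # drives which arrow is highlighted
    strength: Strength         # drives the arrow style and the verb used

    def statement(self) -> str:
        # Renders a written statement like the one shown under the arrows.
        return (f"{self.strength.value} {self.intervention} rather than "
                f"{self.comparison} for {self.population}.")

# Example values loosely inspired by the sore throat RapidRec, for illustration only:
rec = Recommendation(
    population="adults with acute sore throat",
    excluded_population="immunosuppressed patients",
    intervention="a single dose of corticosteroids",
    comparison="no corticosteroids",
    direction=Direction.FOR,
    strength=Strength.WEAK,
)
print(rec.statement())
# -> We suggest a single dose of corticosteroids rather than no corticosteroids
#    for adults with acute sore throat.

Combined, the direction and strength fields yield the four main options according to GRADE.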
Example of an infographic: initial view (reproduced with permission from BMJ).
Example of an infographic: strength of recommendation and selected GRADE (Grading of Recommendations Assessment, Development and Evaluation) summary of findings (reproduced with permission from BMJ).
Example of infographic: key practical issues, values and preferences and additional considerations (reproduced with permission from BMJ).
When the user wishes to see more details, they can ‘click for details’ and access the next interactive layer: the main elements of the GRADE summary of findings (figure 2). The magnitude of effects (in absolute terms) and the certainty of evidence are displayed for each patient-important outcome (figure 2). Continuous outcomes can either display absolute values (eg, kilograms, points on a quality of life scale) and/or be categorised as a proportion of responders (eg, according to the proportion of patients who may reach a minimally important difference).65 The quality of the evidence following GRADE guidance—also known as certainty of the estimate of effect—can be of high, moderate, low or very low quality or certainty.66 67 From each outcome, users can click on the ‘More’ button to access the next layer (figure 3), displaying the GRADE plain language summary68 along with the detailed assessment of the GRADE domains for quality of the evidence.
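The absolute effects per outcome in a GRADE summary of findings are typically expressed as events per 1000 patients in each arm, derived from a baseline risk and a relative effect estimate. A minimal worked example in Python, with made-up numbers rather than figures from any RapidRec:

def absolute_effects_per_1000(baseline_risk: float, relative_risk: float):
    """Return (comparator, intervention, difference) as events per 1000 patients."""
    comparator = baseline_risk * 1000
    intervention = baseline_risk * relative_risk * 1000
    return round(comparator), round(intervention), round(intervention - comparator)

# Hypothetical outcome: baseline risk 30%, relative risk 1.5 with the intervention.
ctrl, interv, diff = absolute_effects_per_1000(baseline_risk=0.30, relative_risk=1.5)
print(f"{ctrl} vs {interv} per 1000 patients ({diff:+d} per 1000 with the intervention)")
# -> 300 vs 450 per 1000 patients (+150 per 1000 with the intervention)

Presenting the difference in absolute terms (here, 150 more events per 1000) is what allows readers to weigh the magnitude of effect against the certainty of the evidence for each outcome.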
Finally, key additional considerations identified by the panel are listed at the bottom of each infographic, such as statements about patients’ values and preferences, resource or accessibility considerations, or other elements of the EtD (figure 3).27–29 Our previous research has shown the critical importance of practical issues in decision-making. These are also generated by the RapidRec process and digitally structured in the MAGICapp. Key practical issues are then also displayed at the end of the infographic.39 40
Participants and setting
The setting of our study was the University Hospitals of Geneva, in Switzerland, providing primary and tertiary inpatient and outpatient care. We performed user tests among a convenience sample of physicians in training, focusing recruitment on residents and interns without restriction to the medical field or division of care. We sent an email invitation to participate in our user testing to every resident and intern in training (approximately 1000 people). Responses were voluntary, with no additional incentive. A meeting was arranged with the convenience sample of those who spontaneously expressed interest. Working in the same hospital, some participants were known to the interviewer (TH), although with no notable conflict of interest. Participants provided written informed consent (see consent form as online supplemental file).
Data collection
Volunteer residents and interns were met individually by the principal interviewer (TH), who had been trained in user testing and think-aloud sessions by an experienced researcher (TA). The interviewer did not participate in the design of the infographic, nor in any of the RapidRecs’ panels, and was thus free of known bias with respect to our research question. Interviews lasted 30–45 min and took place at the workplace in the hospital, so that residents and interns were in their usual clinical environment. The interviewer sat next to the participant, looking together at a computer showing the interactive infographics on the BMJ’s website. There were no other participants during the interviews, nor were any patients present, as the sessions were not part of a clinical consultation. The sessions were audio recorded and transcribed anonymously. The interview started with a few general questions about the specialty the participant was training in, their age and years of experience, and previous awareness of the RapidRecs. Participants were then invited to freely choose which RapidRec they wished to explore, to ensure the topic was of sufficient relevance to the user. The investigator briefly explained the principle of the think-aloud session, in which the focus was to capture their direct experience while interacting with the RapidRec infographic. The interviews did not use a prespecified set of questions but rather a non-directive think-aloud approach: following the user experience of the formats displayed, asking open-ended questions, seeking clarification of vague terms, listening carefully and intervening only when strictly necessary, as per established guidance on user-testing methodology.57 The interview ended with open questions about any general comments or suggestions to improve the tool. After each interview, quick notes were written down to summarise the overall impression of the interview and help with the analysis.
Data analysis
The think-aloud sessions were transcribed by the principal interviewer. All identifiable features were removed. The interviews were then stored solely in an encrypted folder on the principal investigator’s computer. We analysed content using both deductive and inductive approaches, searching for units of meaning for user experience within predetermined themes according to Morville’s framework.69 In this directed approach, two investigators (TH and TA) coded each element of meaning using a revised version of Morville’s framework, categorising them into the six following facets of user experience: usefulness, understandability, usability, credibility, desirability and identification (figure 4).39 58 59 Within each category, we then coded the quality of the reported experience—that is, showstoppers (preventing further use), major frustrations (hindering further use, with participants eventually figuring out the issue), minor frustrations, positive feedback and suggestions for improvement of the infographic.39 The coded elements were further discussed iteratively in the team to enhance the reliability of the findings. Once we had transcribed, coded and classified all comments into Morville’s categories, we searched for themes that came up more than twice. All comments regarding the same theme were then analysed together. We selected representative quotes for the issues most often experienced or that felt critical to users.
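As an illustration of this two-dimensional coding scheme (facet of user experience by quality of the reported experience), the following Python sketch shows how coded units of meaning could be tallied to surface recurring themes. The names and example codes are hypothetical and purely illustrative.

from collections import Counter
from enum import Enum

class Facet(Enum):
    USEFULNESS = "usefulness"
    UNDERSTANDABILITY = "understandability"
    USABILITY = "usability"
    CREDIBILITY = "credibility"
    DESIRABILITY = "desirability"
    IDENTIFICATION = "identification"

class Experience(Enum):
    SHOWSTOPPER = "prevents further use"
    MAJOR_FRUSTRATION = "hinders use until the user figures out the issue"
    MINOR_FRUSTRATION = "minor annoyance"
    POSITIVE = "positive feedback"
    SUGGESTION = "suggestion for improvement"

# Each unit of meaning from a transcript receives one (facet, experience) code;
# counting code pairs across interviews highlights themes arising more than twice.
coded_units = [
    (Facet.USABILITY, Experience.MINOR_FRUSTRATION),   # "font size too small"
    (Facet.UNDERSTANDABILITY, Experience.POSITIVE),    # "the arrows are clear"
    (Facet.USABILITY, Experience.MINOR_FRUSTRATION),   # "too much on one page"
]
for (facet, experience), n in Counter(coded_units).most_common():
    print(f"{facet.value} / {experience.name.lower()}: {n}")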
Patient and public involvement
Although each RapidRec tested included patient partners in its panel, no patient partner was involved in the design and conduct of the present study.40
Results
Participants
We performed 32 think-aloud sessions, 9 among residents and 23 among interns, with a median age of 29 years (IQR 26–32 years). Men and women were represented equally (15 men and 17 women). Although all reported a good understanding of English, only one was a native speaker. Most participants reported between 2 and 4 years of clinical experience, and nine reported some previous experience in clinical or non-clinical research. At the time of their interview, 13 (40%) were in the Division of General Internal Medicine, five (15%) in Paediatrics, two each in the Emergency Room, Gastroenterology and Neurology, and one each (3%) in Nephrology, Haematology, Rheumatology, Anaesthesiology, Maxillofacial surgery, Urology, Gynaecology-obstetrics and Psychiatry. More than half (17 participants) reported that they had heard of the BMJ Rapid Recommendations before, as several had been mentioned in educational rounds, but few had directly accessed the publications and their infographics.
Overview of user experiences
Participants chose freely among the RapidRecs available on the BMJ website (www.bmj.com/rapid-recommendations) at the time the user testing was conducted, between March 2019 and April 2020. Topics covered in the tested infographics included: transcatheter versus surgical aortic valve replacement for aortic stenosis,70 antiretroviral therapy in pregnant women living with HIV,71 corticosteroids for sore throat,62 antibiotics after drainage for uncomplicated skin abscesses,72 type of needles for lumbar puncture,73 patent foramen ovale closure, antiplatelets or anticoagulation for cryptogenic stroke,74 corticosteroid therapy for sepsis,75 prostate cancer screening with prostate-specific antigen,76 oxygen for acutely ill medical patients,77 dual antiplatelet therapy for transient ischaemic attack and minor stroke,78 low-intensity pulsed ultrasound for bone healing,79 treatment for subclinical hypothyroidism,55 gastrointestinal bleeding prophylaxis for critically ill patients,80 subacromial decompression surgery for shoulder pain81 and arthroscopic surgery for arthritis.54 We discuss findings in each facet of Morville’s user experience framework (see figure 4), with illustrative quotes from the interviews. Table 1 summarises the key findings.
Summary of findings per domain of Morville’s user experience with infographics
Usefulness
Most users found the infographics useful, allowing them to rapidly grasp the essence of the recommendation, its rationale and the overall quality of the underlying evidence. Key elements supporting rapid insights were the visual organisation of the infographic and its interactive multilayered design.
It’s quite telling visually, it is clear. It’s great. Sometimes it’s hard to find a good article, with good references, where the essentials come out quickly. Frankly, yes, I’ll use it in practice, I think it’s pretty cool. When you have to present a meta-analysis, why bother presenting tables, when this speaks more to people, it’s more interactive, and, in the end, it gives exactly the same messages but more quickly. I prefer that to tables.—Intern in General Internal Medicine
It allows you to answer a question that’s very pertinent by having arguments to fall back on. We can say yes because of this, or no because of that. All the while keeping in mind that there may be some evidence that is of insufficient quality, and it allows us to understand the state of the literature fairly quickly—Intern in General Internal Medicine
Users had contrasting views about the optimal amount of information they needed in the infographic. Some felt that information was missing, such as the number of randomised clinical trials included and how many patients they enrolled. This information is part of a separate infographic or table in each RapidRec manuscript, but participants tended not to scroll down the webpage to find it, as they most often remained within the infographic. Several participants looked for a p value or CIs. These were also available in the MAGICapp, clickable at the bottom of each summary of findings or at the end of the infographic, but only a few opted to click on them. Those looking for p values and CIs were typically not familiar with the GRADE wording—such as risk of bias, inconsistency, imprecision, indirectness and publication bias—present at the deeper layer of the infographic, and thus did not recognise its relevance to the quality of the evidence. In contrast, most participants noticed the stars rating the evidence quality as very low, low, moderate or high and felt they understood their meaning. Other information participants felt was missing included drug type, dosage and duration of treatment.
Fewer participants found the infographic displayed too much information and were initially put off by the summary of findings at the deeper layer, although some perceived it required getting familiar with the format.
Every time I opened the thing [i.e., more details] I thought: wooh there’s so much info, there’s lots of little colours, numbers, for each thing you have to think about what it means… probably more time than if you just read the abstract of the metanalysis. […] To understand the information, you have to make a little more effort, but once you’ve made the effort and you understand how it works, you're faster… I think it’s a question of getting used to it—Intern in General Internal Medicine
Several participants were puzzled by some core GRADE concepts—such as weak recommendations and their relationship with patients’ values and preferences—or even challenged their usefulness in a clinical practice guideline.
I think they are pushing the definition of a guideline. Because a guideline tells you what you should do—Resident in Paediatrics
Others appraised them with scepticism, expressing that they would not change their practice on this basis, or would need to discuss them with more experienced clinicians.
The recommendation is weak, this is why I wouldn’t apply it to my clinical practice—Intern in Haematology
[…] the weak recommendations I’d kind of ask myself if it’s right to follow them. I’d probably look at it with someone who has more experience than me.—Intern in Haematology
Similarly, about half of the users felt somewhat puzzled by the box providing the panel’s statement on ‘values and preferences’. Some participants read the statement as some sort of final conclusion or a written summary of the recommendation. Others correctly interpreted the statement but expressed that they were not interested in that type of information, while some did not seem to notice it.
We get the impression that the recommendation is strong, based on patients' values and preferences, but not on “medical” grounds. So, you told me that they included patients, but I wonder if we shouldn’t just have the medical opinion, and maybe the patients’ opinion at a second stage, and maybe it shouldn’t be part of the recommendation.—Resident in General internal medicine
Understandability
All users easily understood the population and to whom the recommendation did or did not apply (see figure 1). A large majority understood the recommendation, its direction and strength, and the main summary of findings; many found the information intuitive, although some users were quicker than others in understanding the key messages.
Participants reported that a key feature supporting their understanding was that infographics were similar in format throughout the different recommendations, suggesting that there may be a learning curve in understanding the generic part of the infographic.
I think when you’ve seen two or three, it’s easier to use them and you find the information faster […] it’s always the same presentation with exactly the same outcomes, so that’s pretty good.—Intern in General Internal Medicine
Once you’ve seen it, it’s clear how it works and how it’s structured, and I think it’s really quite good. The winner is clear between that and reading a meta-analysis, it’s much clearer.—Intern in General Internal Medicine
The use of colour and the organisation of the visual field enhanced understandability. All information concerning the same intervention is presented on the same side and is of the same colour (see figures 1 and 2).
What I like about it is that it’s very simple and minimalistic to start with. And in fact, it allows us to get straight to the point, with recommendations and results, in a very strategic way, using colour codes, with a real differentiation of the visual space, with everything on the left concerning surgery and everything on the right concerning conservative treatment, highlighting the arrows in favour of a beneficial outcome. The star code is also a nice touch.—Intern in Gastroenterology
Understandability was mediated by both symbols and written text, each complementing the other. For example, some participants expressed minor confusion about the symbols conveying the direction and strength of the recommendation (see also the previous ‘Usefulness’ section) but understood them better, or felt reassured, after reading the written summary underneath the arrows.
So, there’s a strong recommendation to give hormones… Ah no, it’s NOT to give hormones. So, I corrected myself by reading the summary sentence, see? So it’s always good to make sentences.—Intern in Psychiatry
Overall understandability of the summary of findings was also good, although a few participants were at first intimidated by numerical information, especially when associated with visual overload (see ‘Usefulness’).
The numbers themselves… I think you could have a box to find out more with the numbers, and then just have the sentences that tell you… for someone like me who isn't really into research.—Intern in Paediatrics
So, in fact… There are numbers… so it’s true that since I do surgery and not really internal medicine, the numbers… I don't really have the values in my head, and there, I don’t see any units—Intern in Surgery
A plain language summary for each outcome is included in the infographic, accessible by clicking on the ‘More’ button. One user mentioned a better understanding of an outcome when opening this plain language summary.
Ahhh but this is good, because it’s much clearer when you open and see the little bubble, it’s much simpler. Frankly, I think it’s much clearer with the little bubble than with the arrows. Arrows aren’t very intuitive for me. If I’d opened it like that, I’d have understood straight away.—Intern in Paediatrics
Understanding of the GRADE domains of quality of evidence was limited for about half the users (see also previous ‘usefulness’ section), and they looked for definitions. Infobuttons (information appearing when hovering over an item) were often received with enthusiasm.
It’s quite nice that there’s a little commentary to help understand the GRADE score.—Intern in Paediatrics
Usability
Usability was very good in general. Only minor frustrations were identified, such as too small a font size or too much material on the same page; this was particularly the case when participants were presented with a summary of findings containing a lot of information and numbers.
Users appreciated the colours, but some felt they conveyed a hidden colour code, or overlooked information that was not highlighted with colour, feeling it was generally less important.
It feels like there is a hidden color code, the PPI and H2 in green are better than the sucralfates in orange. I get the impression that this means they are not as good, when it’s probably just the color code.—Resident in General internal medicine
I didn’t stop on preferences and values because they’re small and gray.—Intern in Paediatrics
Dropdown menus helped users navigate the multilayered content, but several users overlooked some of the layers. While many clicked the ‘click for more details’ button leading from the recommendation to the summary of findings, fewer spontaneously clicked on the more discreet ‘More’ button to access the plain language summary and the full GRADE quality of evidence assessment, which stresses the importance of making dropdown menus sufficiently visible.
And it’s nice to have drop-down menus that open only on demand, if you really want to back up what’s been pre-summarized and visually pleasing to read.—Intern in Gastroenterology
In addition to improving understandability (see above), infobuttons (information that appeared when hovering over a text or an icon) enhanced the navigational experience.
It’s very nice when you hover the mouse over the abbreviations and the whole name is immediately visible.—Intern in Paediatrics
Credibility
Most users found the RapidRec infographic highly credible, particularly given its affiliation with a major clinical journal like the BMJ and the transparency of the date of publication.
Yes, I trust the BMJ, I think we can easily rely on it. If you have any doubts, you can look up what you’re looking for in the studies.—Resident in the Emergency Room
Several participants spontaneously checked the guideline’s authors, looking for familiar names.
As discussed above in the section on the infographic’s usefulness, some users felt that information was lacking (ie, the number of trials and patients included), which affected their perception of the credibility of the guideline. Most of them did not scroll down the manuscript to find this information, which was available in another infographic or table, as they seemed to expect it at the top level. The same issue applied to those who expected to find p values and CIs to appraise imprecision at the top level, in place of the full—and more comprehensive—GRADE assessment of the quality of evidence. For the same users, use of the GRADE approach did not enhance their experience of trustworthiness, as they lacked familiarity with guideline standards and methods.
Identification
Even though most users easily identified with the content of the RapidRec that they chose to explore, and found it relevant to their practice, they judged it on how aligned it was with their current knowledge and practice. When the recommendation was in line with these, they were more inclined to consider the content trustworthy. Conversely, users expressed more doubts when the recommendation differed from their current practice, in particular when the recommendation was issued as weak.
I think it’s aimed more at general practitioners than specialists […] Sometimes we see 80-year-olds with high PSA levels, and we think it’s useless. So, I understand why they say no screening, but I'd be inclined to nuance this a little more, by saying that the shared decision should be made with someone who knows more than what’s being said here.—Intern in Urology.
Although they were free to choose among the list of existing RapidRecs on the BMJ website, these did not cover all fields of medicine, and some physicians, such as residents in psychiatry or paediatrics, commented that they would have been keener to explore content closer to their area of interest.
Desirability
One aspect that diminished desirability was the advertisements and the text surrounding the infographic. Some users looked for a way to show the infographic full screen, which is not possible in its interactive form.
It starts with these huge ads on top of the page that take up half of the screen. Then there are links everywhere, you can click on a thousand things. You can like and tweet the article before having read it. It is confusing at the start, and you don’t really know where to begin—Intern in General internal medicine
However, participants were generally positive about the look of this presentation format. Recurrent adjectives used to describe the RapidRecs were ‘beautiful’ and ‘clear’, particularly regarding the use of icons, images, colour, and the visual organisation of space and text.
Some mentioned how they could use this presentation format in the future, for example, during their next journal club or to ‘brag about their newly acquired knowledge’ at work:
All this is real data? There’s no catch? There’s no “lorem ipsum” just filling in text like that? So tomorrow I can sound clever when I go to work?—Intern in Neurology
Discussion
Principal findings
Clinicians need trustworthy clinical practice guidelines to succeed in evidence-based diagnosis and treatment at the bedside, but their dissemination remains limited and their implementation lagging. Our study assessed the user experiences of physicians in training in various specialties when they interact with the RapidRecs’ multilayered interactive infographics.
We found that users had a mostly positive experience using these infographics. They were understandable and useful to rapidly grasp the essence of the recommendation, its rationale and supporting evidence in a credible way. The multilayered features accessed through drop-down menus, the hover-over infobuttons, the colours and the visual organisation were enjoyable and supported understanding. However, some features could be overlooked if too discreet or diluted in content, and colours could unintentionally convey more (or less) importance or suggest a hidden colour code. Visuals, numbers and text complemented each other to enhance understandability. Some users felt intimidated by numbers or the amount of information, although they perceived there could be a learning curve in using the generic formats across RapidRecs. Plain language summaries helped but could be highlighted rather than being available only at deeper layers. Finally, several users had limited understanding of key GRADE domains of the quality of evidence and were puzzled by the implications of weak recommendations, or by the panel’s statements about patients’ values and preferences.
Comparison with current literature
Infographics have been widely used to disseminate research findings and in medical education,41 but their use to disseminate clinical practice guidelines has been limited, particularly for the new generation of clinical practice guidelines following trustworthiness standards and methods, including GRADE and the EtD framework. Our results mostly concur with those of Van Bostraeten et al,53 who also conducted user testing of five selected BMJ RapidRecs suited to primary care. Although these tests were conducted among a different population—namely GPs in outpatient practice, with a mean age of 48 years—both the positive and the challenging experiences were remarkably similar. GPs reported that the selected infographics were time-efficient, easy to understand, trustworthy and supportive of decision-making. They reported similar challenges around the use of complex scales and of terminology related to statistics and guideline methodology.53
Indeed, the most challenging finding of our study is that, despite users’ overall positive experience of the infographics, the format still fails to intuitively convey the meaning and implications of key GRADE concepts of evidence appraisal and strength of recommendations. Of our clinicians in training, only a few were familiar with the terms ‘risk of bias’, ‘imprecision’, ‘indirectness’, ‘inconsistency’ and ‘publication bias’. This observation was surprising considering their exposure to GRADE in their medical curriculum, and its widespread use across many guidelines, including commonly used online summaries such as DynaMed or UpToDate.6 These challenges had been observed in previous studies, as highlighted by the review of Sharp et al.34 In their systematic review, they reported seven articles that highlighted the need to be explicit about how the scale is used and recommended providing distinct explanations of the GRADE rating scale. The use of hover buttons to include definitions of these terms might have a positive impact on understanding. The negative, even paternalistic, attitude of some participants towards the incorporation of patients’ values and preferences in the panel’s rationale was previously observed by Brandt et al82 and conveys a well-known discomfort of clinicians with the very notion of weak recommendations. The GRADE working group has produced a whole body of methodological research to understand its numerous causes6 64 83 84 and has attempted to innovate with visual or narrative presentation formats.85 86 This is particularly important as at least two-thirds of recommendations across all medical fields are likely to be weak according to GRADE guidance.38 The solution may well lie beyond the scope of infographics, or of any specific format that hopes to intuitively convey these notions so that they feel acceptable and useful to clinicians, but rather in combined didactic efforts on guideline methodology and shared decision-making.38 87 88
Strengths and weakness of the study
Our study has limitations. Given its qualitative nature, our sample size was small. However, through iterative interviews, we felt that no substantial new theme was arising, suggesting that we had reached saturation and that further inclusion of users would likely result in repetition. Another limitation is that interviews took place in a French-speaking teaching hospital, and in a controlled think-aloud setting that may not be fully generalisable, although the intention of interviewing residents and interns at the workplace was to stay close to their clinical practice and learning. User experience may differ when accessing infographics around a specific patient’s care. Moreover, we did not investigate knowledge retention, or impact on guideline adherence and related patient outcomes. Finally, our study did not compare different formats, although there were differences in design across the RapidRecs tested.
Strengths of our study include a thorough process of user-testing conduct and analysis, a method validated to translate users’ experience into practical and transferable findings.57 We conducted it among a population of clinicians in training with different backgrounds and specialties of interest. Our study adds to a rising body of evidence on how infographics can be used for the dissemination and uptake of clinical practice guidelines.44 46 50 53
Meaning of the study
The BMJ RapidRecs, and their accompanying infographics, are the result of a unique collaboration and cocreation between independent researchers and a wide-audience publisher. This model offers new opportunities for interdisciplinary innovation between graphic designers, journal editors, practising clinicians, experts in evidence-based medicine and patient partners. Ongoing and future avenues of research include the interplay between infographics, digital authoring and publication platforms—including enhanced support from AI—and the adaptation of infographics to specific methodological challenges such as complex risk stratification and personalised guidance,89 multiple comparisons of interventions90 91 and living evidence and guidelines.56
Data availability statement
Data used during the current study are available from the corresponding author upon request.
Ethics approval
The Geneva cantonal ethics commission for research (Commission Cantonale d’Ethique de la Recherche, CCER; Req-2019-00048) reviewed the protocol and exempted this study. Participants gave informed consent to participate in the study before taking part.
Acknowledgments
We wish to thank the BMJ, and in particular Dr Helen Mac Donald (BMJ Editor) and Dr Will Stahl-Timmins (BMJ Data Graphics Designer), for their invaluable contribution to the codevelopment and codesign of the infographics and RapidRecs formats.
Footnotes
Contributors TA was involved in the design of the study and is the guarantor. TH collected the data. TH and TA analysed the data. TH wrote the initial manuscript. PV and TA revised the manuscript. All authors read and approved the final manuscript. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. The lead author affirms that the manuscript is an honest, accurate and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests TA is the Chair and PV the Chief Research and Innovation Officer of the MAGIC Evidence Ecosystem Foundation, which coinitiated and coordinates the BMJ Rapid Recommendations. TA participated in the design of all infographics. The other authors declare no other competing interests. All authors have completed the ICMJE uniform disclosure form and declare no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; and no other relationships or activities that could appear to have influenced the submitted work.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.