Abstract
Introduction Rapid systematic reviews (RRs) have the potential to provide timely information to decision-makers, thus directly impacting healthcare. However, the lack of consensus regarding the most efficient approaches to performing RRs and the presence of several unaddressed methodological issues pose challenges. With such a large potential research agenda for RRs, it is unclear what should be prioritised.
Objective To elicit a consensus from RR experts and interested parties on the most important methodological questions (from the generation of the question to the writing of the report) that the field should address in order to guide the effective and efficient development of RRs.
Methods and analysis An eDelphi study will be conducted. Researchers with experience in evidence synthesis and other interested parties (eg, knowledge users, patients, community members, policymakers, industry, journal editors and healthcare providers) will be invited to participate. The following steps will be taken: (1) a core group of experts in evidence synthesis will generate the first list of items based on the available literature; (2) using LimeSurvey, participants will be invited to rate and rank the importance of suggested RR methodological questions, with open-format questions allowing modifications to the wording of items or the addition of new items; (3) three survey rounds will be performed asking participants to re-rate items, with items deemed of low importance being removed at each round; (4) a list will be generated of the items believed to be of high importance by ≥75% of participants; and (5) this list will be discussed at an online consensus meeting that will generate a summary document containing the final priority list. Data analysis will be performed using raw numbers, means and frequencies.
Ethics and dissemination This study was approved by the Concordia University Human Research Ethics Committee (#30015229). Both traditional knowledge translation products (eg, scientific conference presentations and publications in scientific journals) and non-traditional products (eg, lay summaries and infographics) will be created.
- health economics
- health policy
- public health
- statistics & research methods
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
STRENGTHS AND LIMITATIONS OF THIS STUDY
The eDelphi process is a well-recognised and highly structured method for consensus building.
Including a variety of participant profiles, from researchers to key end-users (such as policy-makers, guideline producers and healthcare professionals), will make it possible to understand potential differences in research priorities.
Although the modified eDelphi approach, using an online format, may present challenges, it allows for faster data collection and participation by a broader range of individuals across the globe, is more cost-effective than in-person Delphi approaches and is less susceptible to the judgements of group members with higher status.
Although this study is an important addition to the literature in the evidence-synthesis field, and it can serve as a ‘road-map’ for future rapid systematic review (RR) methodological studies, it is only the first step towards refining the conduct of RRs in a more time-efficient way.
Background
Evidence syntheses (eg, systematic reviews (SRs)) are a useful strategy across a number of domains, notably for summarising the evidence around a specific question.1 In a health context, findings from SRs have been used to inform decisions in clinical practice (normally through clinical practice guidelines), in healthcare systems and in shaping policy.1 2 However, conducting a full SR is time-consuming, sometimes taking up to 2 years,3 by which time the scientific literature may have already moved on, and expensive, with an estimated cost of at least US$100 000 for a high-quality SR.4 5
To address the challenges of SRs, the concept of rapid evidence products has been introduced, including inventories, rapid response briefs and rapid systematic reviews (RRs).6 RRs result from an evidence synthesis approach that uses streamlined procedures,7 8 so certain methodological elements are simplified or omitted compared with SRs.9 Currently, RRs are being conducted to answer urgent questions and/or to support decisions where there is limited time and/or resources, that is, in situations where time-efficiency and cost-efficiency are key.10 11 For example, RRs have been extensively used in addressing issues related to the COVID-19 pandemic.8 12 Preliminary evidence suggests that the conclusions reached by RRs are typically consistent with those of SRs.10 In addition, when applied to policy decision-based health technology assessment reports, RRs have been shown to positively impact the healthcare system, resulting in a reduction of expenditures.13 14
The use of high-quality evidence summary methods is essential to providing reliable results. For traditional SRs, there are well-defined, prespecified methods, for example, for conducting searches, selecting relevant studies, appraising their quality and synthesising the available evidence to answer the research question, which ensure quality and reduce bias.3 However, although methodological rigour and transparency remain essential for representative and reliable results in RRs,8 there is a lack of standardised methodology on how to adapt SR methods to reliably perform an RR.15 16 Several studies and reviews15–17 have noted this lack of consensus in the methodological approaches being used for RRs, highlighting the heterogeneous nomenclature and terminology being used to describe the same concepts and the use of varied methodologies without a clear rationale behind the choices being made.
In 2017, the WHO commissioned a guide on how to perform RRs, which explored various approaches. The guide emphasised that methods can be simplified at any stage of the review process and that decisions should consider the resources at hand and be customised to the needs of the decision-makers.6 The Cochrane Initiative has also produced some methodological guidance for RRs,18 but the impact and costs of each approach are still unclear. Evidence Synthesis Ireland, using the James Lind Alliance method, identified RR research priorities.19 Among the top ten questions generated, three focused on methodological issues but in relatively broad categories.
The current study will build on the findings from Evidence Synthesis Ireland by further exploring more focused questions around RR methods, that is, the stages between question generation and report writing. The identification of these unanswered questions is required to design and develop methodological studies that can then inform the conduct of RRs. For example, questions about how many databases should be included, which limitations should be applied to database searches, and whether peer review is necessary for all steps have not yet been answered. Given the number of areas that still need to be explored, the small amount of currently available evidence, the limited resources available to conduct methodological studies, and the lack of general consensus on where to start, the aim of this project is to elicit a consensus from RR experts and interested parties on the most important methodological questions for improving the time-efficiency of RRs and, ultimately, to create a prioritised research agenda for the field to address.
Objectives
To identify and compile the main unanswered questions related to the methods used in conducting time-efficient RRs, specifically from the stage after generating the research question to just before writing the final report.
To create a priority list of the most crucial questions regarding RR methods that need to be addressed.
Methods
The study will follow the general eDelphi process20–22 and the guidance on Conducting and REporting DElphi Studies.23 There will be an initial generation of potential research areas, followed by multiple rounds of an online survey for ranking, and then a final consensus meeting. The eDelphi process is particularly useful for surveying areas of uncertainty and obtaining consensus.20 24 This method has the advantage of enabling each participant to express views impersonally, it is low resource and flexible,25 and it has been widely used in health research.26 After ethical approval, the study will start in March 2022, with the first survey round starting in June 2022 and the last round being finalised in January 2023. The consensus meeting will then occur in the period of June to September 2023.
Given the focus on efficiency, rather than just quality, the eDelphi will ask participants to answer: ‘How important would answering this question be to improve the time-efficiency (balance between the time taken and the quality of the final results) of a systematic RR in a particular field?’.
Participants
The sample will consist of two key groups: international experts who have published RRs or undertaken methodological research in RRs and knowledge synthesis; and key end-users. To standardise the level of expertise, all experts will self-identify, answering eligibility questions, on the basis of having: verifiable experience in designing or delivering evidence summary research; participation in at least one RR; having ≥5 years of research experience; and self-rating their knowledge on evidence synthesis as ≥7 on a 0 (no expertise) to 10 (expert) point Likert-like scale. We will also include interested parties (eg, guideline and policy developers, end-users (public and patients), industry members and journal editors) who have had previous experiences in participating in any aspect of evidence synthesis.
A recruitment email will be distributed by our global partners through their contact lists, for example, the International Behavioural Trials Network (IBTN, https://www.ibtnetwork.org/), the Strategy for Patient-Oriented Research (SPOR) Evidence Alliance (https://sporevidencealliance.ca/) and COVID-END (https://www.mcmasterforum.org/networks/covid-end). In addition, as performed by Tricco et al,15 organisations that produce RRs, identified through the International Network of Agencies for Health Technology Assessment’s (https://www.inahta.org/) list, will be asked to distribute the study invitation to members of their group. The recruitment email will provide a link to access the information about the study and the consent form. There are no restrictions on the country of origin of the participants, but all study-related information will be provided in English.
Providing consent
The informed consent forms will explain the objective, procedures and other details that are important to participants (online supplemental material). Participants will be asked to read the ethics board-approved information/consent forms and provide agreement by checking a box confirming that they have reviewed the information/consent form, consent to participate in the survey and understand that their participation is voluntary and entirely confidential. The contact details of study team members will be listed in the information/consent form in case participants have queries. There will be two consent forms, one for the eDelphi rounds and one for the consensus meeting. LimeSurvey will be used to obtain consent, as well as to distribute the surveys.
Initial topic generation
A core group of experts in evidence synthesis, mainly within the biomedical sciences, referred to as the Central Scientific Committee (CSC), and drawn from the leadership of the SPOR Evidence Alliance, IBTN, COVID-END and notable published scholars, will generate a list of methodological questions that they think are relevant to RRs. The items will be specific and focused, in order to be able to generate specific research questions rather than broad conceptual areas.
The included topics will cover the period after the review question has been generated and before the creation of the final report, for example, search strategy, study selection (level one and two screening), data extraction, risk of bias appraisal and synthesis. The item list will also be drawn from the WHO guide for RRs,6 the Delphi process on RR methods,15 and the Priority III study19 to form the initial ‘long-list’ of items.
This phase of the study will take around 3 months to ensure the inclusion of as many appropriate items as possible.
Online survey
The eDelphi process will involve approximately 50 RR experts and end-users, who will be asked to complete at least three rounds of online questionnaires, spaced around 1 month apart. Each survey round will be open for about 5 weeks, giving participants sufficient time to complete it. A system will tag data to individuals and provide them with their scores from previous rounds, while also reporting the aggregated data.
Prior to round 1
The initial survey will include basic demographic information, including eligibility questions (ie, years of experience, job title, country and province of residence, age group and sex). Once they agree to participate in the study, participants will be provided with more specific sociodemographic questions (online supplemental material) and the ‘long-list’ of survey items from the previous phase.27 We will only provide the survey to those agreeing to participate to prevent attrition biases.28
Round 1
As per our previous eDelphi projects (eg, Dragomir et al29), participants will rate the importance of suggested items (‘How important would answering this question be to improve the time-efficiency—balance between the time taken and the quality of the final results—of a RR in a particular field?’), focusing on the concept rather than on the wording. Importance can be rated as low, medium or high (table 1). For all items that an individual rates as high importance, they will be asked to rank them in order of priority (1=highest priority, 2=2nd highest, etc) until all such items are ranked. Specific questions with open-format responses will allow for modifications to the concept of items. Participants will also be able to add new items that they believe were missing from the initial round.
Classification of the items
Responses will be collated and summarised.26 Any items rated as low by 50% or more of the participants will be excluded, a consensus threshold similar to those adopted in other Delphi studies.24 29 As this is the first round, the threshold will be lower than in the following rounds. The CSC will review comments and make any necessary changes to items or add new relevant items.
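The same tallying logic, with the round-specific cut-offs described in the following rounds, applies throughout the eDelphi. As a minimal sketch of this rule (in Python; the item labels, example data and function names are hypothetical and purely illustrative), counting the proportion of ‘low’ and ‘high’ ratings per item might look like this:

```python
from collections import Counter

def proportion_rated(ratings, level):
    """Return the proportion of participants who rated the item at the given level."""
    if not ratings:
        return 0.0
    return Counter(ratings)[level] / len(ratings)

def classify_items(item_ratings, low_cutoff=0.50, high_cutoff=0.75):
    """Apply the round's consensus thresholds to each item.

    item_ratings maps an item label to the list of 'low'/'medium'/'high'
    ratings it received, one per participant. Items rated 'low' by at least
    low_cutoff of participants are excluded (50% in round 1, 75% in later
    rounds); items rated 'high' by at least high_cutoff are flagged as
    candidates for the final priority list.
    """
    excluded, high_priority, retained = [], [], []
    for item, ratings in item_ratings.items():
        if proportion_rated(ratings, "low") >= low_cutoff:
            excluded.append(item)
        elif proportion_rated(ratings, "high") >= high_cutoff:
            high_priority.append(item)
        else:
            retained.append(item)
    return excluded, high_priority, retained

# Hypothetical round 1 data: four participants rating two candidate questions.
example = {
    "How many databases should be searched?": ["high", "high", "medium", "high"],
    "Can grey literature searches be omitted?": ["low", "low", "medium", "high"],
}
print(classify_items(example))
```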
Round 2
Participants will be provided with the percentage of respondents ranking each item as high priority, as well as their own ratings from the previous round. They will be able to re-rate the perceived importance of each item, as well as the importance of any new items. They will also be asked whether they agree with the items excluded after round 1 or if any essential items are still missing. Items for which ≥75% of participants disagree with the exclusion will remain on the main list for the next round. For all items that an individual rates as high importance, they will be asked to rank them in order of priority (1=highest priority, 2=2nd highest, etc) until all such items are ranked. Items rated as low by 75% or more of the participants in round 2 will be excluded.29
As in round 1, open-format questions will allow suggestions for modifications to the items or the addition of new items. The comments will be reviewed by the CSC and changes or additions will be made as needed.
Round 3
A summary of round 2 will be provided, including the percentage of respondents rating each item as high priority, as well as their own rating. Participants will re-rate and re-rank the remaining items. After round 3, we will generate a final list of items for discussion at the consensus meeting (those believed to be of high importance by ≥75% of participants). Three rounds should allow us to reach stability and agreement about most items.28 30 Information about deviant cases will be shared with the consensus group.27
Security of the data
All data that we capture will be stored on secure servers located within Canada, with only information necessary for the research study being collected. All information obtained will be kept strictly confidential, within the limits of the law. To preserve the confidentiality of the data, a code number known only to those directly involved with this research project will be assigned to each participant, and any personally identifiable information will be stored in a secured computer file.
Consensus meeting
This step will aim to detail the final items to be included in the priority list.
Participants
Participants will be invited from the eDelphi phase and selected purposively by the research team to include individuals with a variety of backgrounds (eg, country, academic level, research context) who selected the box indicating their interest in participating in the consensus meeting. Approximately 25 people will be invited to an online meeting, a size that balances diversity of opinion with meaningful opportunities for interaction,31 and maximises the ability to achieve consensus.
The individuals selected will be contacted by email, with a link that provides access to the information and consent form of the consensus meeting. After accepting, participants will access the Zoom platform with an invitation link sent by email.
The meeting will be recorded to aid with the generation of the final report. Zoom’s inbuilt anonymous voting system will be used for people to be able to vote on the inclusion or exclusion of items.
Meeting structure
Established nominal group technique methods will guide the consensus meeting.26 32 The summary of the results of the previous work will be provided in advance to ground conversations on empirical information and to facilitate cohesive discussion during the meeting.27 The meeting will start with formal presentations. Using a triangulation approach,33 34 we will then lead a structured discussion of each proposed item.35 An experienced, independent facilitator will conduct the discussions.27 Participants will discuss and vote (using anonymous e-ballots), with the potential for a re-vote if needed,28 with only items supported by at least 75% of participants being adopted.27
Anticipated output
The consensus meeting will generate a summary document detailing the questions that will generate the final priority list. This list draft will be circulated to the consensus group participants who will be asked to check if the document accurately represents the discussions and decisions made during the meeting.35 Then, we will distribute a final version of the document to all eDelphi participants to seek feedback on its wording and content and to assess whether the consensus meeting accurately captured their opinions.27
Data analysis
The research team will analyse the sociodemographic characteristics of the participants using raw numbers, means and percentages. For each round of data collection, the frequency of participant ratings for each item will be used to determine the percentage of low, medium or high ratings for each item. For the ranking question, each ranking position will receive a score, with the highest position receiving the lowest score. The average score of each item will be calculated by dividing the sum of the scores attributed to that item by the number of participants who ranked it. Items will then be presented in ascending order of their average score, so that the first item, the one with the lowest score, is considered the most important. Data on average rank and the number of individuals providing data will be included in summary tables.
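As a minimal illustration of this scoring rule (a Python sketch; the item labels and example rankings are hypothetical), the average rank score and the resulting ascending order could be computed as follows:

```python
def average_rank_scores(rankings):
    """Average rank score per item, following the analysis plan.

    rankings is a list of per-participant rankings; each ranking maps an item
    label to its rank position (1 = highest priority), so the top-ranked item
    receives the lowest score. Each item is averaged only over the participants
    who actually ranked it. Returns (item, average score) pairs in ascending
    order, so the first item is the most important one.
    """
    totals, counts = {}, {}
    for participant_ranking in rankings:
        for item, position in participant_ranking.items():
            totals[item] = totals.get(item, 0) + position
            counts[item] = counts.get(item, 0) + 1
    averages = {item: totals[item] / counts[item] for item in totals}
    return sorted(averages.items(), key=lambda pair: pair[1])

# Hypothetical rankings from three participants over two items.
example = [
    {"Item A": 1, "Item B": 2},
    {"Item A": 2, "Item B": 1},
    {"Item A": 1},  # this participant ranked only one item as high importance
]
print(average_rank_scores(example))  # Item A averages ~1.33, Item B averages 1.5
```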
Team members
The project will be organised and developed by two main groups: the CSC and the Coordinating Research Team. The full list of members is available on the website (https://mbmc-cmcm.ca/projects/edelphi/). The CSC will be responsible for: the review and editing of the initial list of methodological items; providing feedback on the survey structure and project plan; providing feedback on the results of each survey round (agreeing on the items that participants may suggest, dropping of items, etc); and helping to share the eDelphi with their networks. The research team, the Montreal Behavioural Medicine Centre, will be responsible for: creating and delivering on the project timelines; creating project documents; setting up and organising the surveys.
Patient and public involvement
Given the emphasis on the methodological aspects of the RR process, with researchers being the primary target end-user of this work, we decided not to include patients in the CSC. The eDelphi does include interested parties (eg, guideline and policy developers, end-users (public and patients) and journal editors), on whom we will draw for the final consensus meeting, to ensure that the final document has direct input from all related groups. In addition, we will leverage interested parties in the creation of a variety of knowledge translation products, for example, lay summaries, public-facing presentations and infographics.
Expected outcomes and limitations
The Delphi process is a well-established consensus-building process that will provide us with a good picture of the priority questions that need to be answered regarding the methodological conduct of RRs. The present study will generate a list of specific and focused questions, which can be used to prioritise research questions and to design future methodological studies that will answer those questions. These will ultimately create an evidence base for evidence synthesis researchers when deciding the best approaches to perform a RR.
While this research represents an important initial stage towards refining the conduct of RRs in a more time-efficient way, it will not provide definitive answers on the conduct of RRs. In addition, the response rates and the representation of different profiles, perspectives and experiences among participants cannot be guaranteed. However, the breadth and diversity of the recruitment strategy will likely help mitigate this issue. Finally, the terminology used might be interpreted differently by individuals from different domains and backgrounds. To mitigate this, an extensive list of definitions will be provided and we will emphasise that items should be evaluated based on the concept rather than on the wording.
Ethics and dissemination
This study was approved by the Concordia University Human Research Ethics Committee under Certification Number 30015229.
The dissemination plan includes both traditional academic knowledge products, for example, presentations at scientific meetings and publications in peer-reviewed journals, as well as other knowledge dissemination products, for example, lay summaries, public-facing presentations and infographics. We will also leverage social media, via the members of the CSC and related organisations, to disseminate results and information as broadly as possible. We will specifically target potential funders, as these will be the bodies approached to support the future methodological studies needed to address the final priority list.
Ethics statements
Patient consent for publication
References
Supplementary materials
Supplementary Data
This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.
Footnotes
Twitter @arianymv, @profsandyoliver, @PaulaABRibeiro1, @Daniellep89, @Elie__Akl, @lavisjn, @BraggePeter, @laurenz_ml
Contributors Concept: AMV, GS, JS, PR and SB. Design and methods: AMV, CdW, ACT, SO, JS, PR, DP, EAA, JL, TK, PB, LL, SB. Drafting of the manuscript: AMV and SB. Critical revision of the manuscript for important intellectual content: AMV, GS, CdW, ACT, SO, JS, PR, DP, EAA, JL, TK, PB, LL, SB. Supervision: SB. All authors read and accepted the final version of the protocol.
Funding This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors. AMV is supported by a Fonds de recherche du Québec: Santé (FRQS) doctoral scholarship. ACT is supported by a Tier 2 Canada Research Chair in Knowledge Synthesis. SO is supported by a research grant to her institution: the International Development Research Centre has funded Prof Sandy Oliver for her PEERSS partnership work (https://peerss.org/), in the context of which she contributed to this paper. JNL is supported by the Tier 1 Canada Research Chair in Evidence-Support Systems. TK holds a leadership or fiduciary role in the WHO EVIPNet Steering Group. LL holds a leadership or fiduciary role in the South Africa Centre for Evidence, an NGO funded by external grants to support the use of evidence by policy-makers. SLB is supported by the CIHR-SPOR initiative through the Mentoring Chair programme (SMC-151518) and by the FRQS through the Chaire de recherche double en Intelligence Artificielle/Santé Numérique ET sciences de la vie programme (309811).
Competing interests The authors alone are responsible for the views expressed in this paper and they do not necessarily represent the views, decisions or policies of the institutions with which they are affiliated. The authors have no conflicts of interest to declare.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.