Abstract
Objective: The aim of this study was to examine the epidemiological and reporting characteristics as well as the methodological quality of meta-analyses (MAs) of observational studies published in Chinese journals.
Methods: Five Chinese databases were searched for MAs of observational studies published from January 1978 to May 2014. Data were extracted into Excel spreadsheets, and the Meta-analysis of Observational Studies in Epidemiology (MOOSE) and Assessment of Multiple Systematic Reviews (AMSTAR) checklists were used to assess reporting characteristics and methodological quality, respectively.
Results: A total of 607 MAs were included. Only 52.2% of the MAs assessed the quality of the included primary studies, and the literature search was not comprehensive in the majority (85.8%) of the MAs. In addition, 50 (8.2%) MAs did not search any Chinese databases, while 126 (20.8%) did not search any English databases. Furthermore, 41.2% of the MAs did not describe the statistical methods in sufficient detail, and most (95.5%) did not report on conflicts of interest. However, compared with the period before publication of the MOOSE checklist, the quality of reporting improved significantly for 20 subitems after its publication, and 7 items demonstrated significant improvement after publication of the AMSTAR checklist (p<0.05).
Conclusions: Although many MAs of observational studies have been published in Chinese journals, the reporting quality is questionable. Thus, there is an urgent need to increase the use of reporting guidelines and methodological tools in China; we recommend that Chinese journals adopt the MOOSE and AMSTAR criteria.
- EPIDEMIOLOGY
- MEDICAL JOURNALISM
- QUALITATIVE RESEARCH
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
Strengths and limitations of this study
Our study was the first to examine the compliance of meta-analyses of observational studies published in Chinese journals with the Meta-analysis of Observational Studies in Epidemiology reporting guideline, and the first to assess their methodological quality with the Assessment of Multiple Systematic Reviews tool.
This study included a comprehensive literature search using five Chinese databases to ensure a high degree of representativeness.
However, this study included only meta-analyses published in Chinese journals, whereas Chinese investigators increasingly publish articles in international journals.
Introduction
Meta-analysis as a statistical and scientific tool has grown immensely popular over the past decade.1 Several studies have suggested that meta-analyses restricted to randomised controlled trials (RCTs) provide stronger evidence than those including non-randomised designs.2,3 However, in many situations, randomised controlled designs are not feasible and only data from observational studies are available. Therefore, observational studies have an important role in answering questions related to treatment effectiveness and disease aetiology.
Owing to the lack of randomisation, observational studies are inherently more prone to potential biases.4,5 For instance, case–control studies are retrospective in nature, which increases the potential for incomplete and biased data collection. It is therefore especially important to describe precisely the methodology that generated the results of meta-analyses of observational studies.
The Meta-Analysis of Observational Studies in Epidemiology (MOOSE) checklist and the Assessment of Multiple Systematic Reviews (AMSTAR) tool were first introduced and published in China in 2010.6,7 Over the past decades, many studies have described the quality and reporting characteristics of reviews across multidisciplinary clinical research topics, but these studies did not address the epidemiological characteristics or methodological quality of MAs of observational studies in China.8–10 The aim of this study was to describe the epidemiological and reporting characteristics, as well as the methodological quality, of the meta-analyses of observational studies published in Chinese journals, using the most up-to-date assessment tools.
Methods
Data sources and searches
Five Chinese databases (the Chinese Biomedical Literature database (CBM), the Chinese Science Citation Database (CSCD), VIP information (Chinese Scientific Journals database), the China National Knowledge Infrastructure (CNKI) and the WANFANG database (Chinese Medicine Premier)) were searched from inception through May 2014 (see online supplementary file 1). The search terms included ‘review’, ‘meta-analysis’, ‘systematic review’, ‘pooled analysis’, ‘overview’, ‘cohort’, ‘case control’ and ‘cross sectional’. The search was limited to MAs as the article type and to three main study designs: cohort, case–control and cross-sectional. The search was also limited to human studies. Editorials, letters, conference papers and meeting abstracts were excluded. The full texts of potentially eligible studies were then retrieved and further evaluated, and the reference lists of retrieved articles were also searched.
Data collection and analysis
Study reports were grouped according to the year that the two checklists were introduced in China: 2009 and earlier (prepublication) or 2010–2014 (postpublication). Articles were scored as ‘yes’ if they were reported in enough detail to allow the reader to judge that the definition had been met. An article was scored as ‘partially/cannot tell’ only when the report was incomplete or unclear. Articles were coded as ‘no’ when the checklist item was not reported. We also collected information regarding the risk of bias tools and methods used to search Chinese journals.
To enhance inter-rater agreement, we evaluated 20 papers (not included in the study sample) in a pilot test of the database before starting data abstraction, and the proper scoring of each item was discussed in detail. The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guideline was also followed (see online supplementary file 2). Z-wZ and JC searched the literature. ZL, J-cM, JC, J-lL and JW participated in data extraction and quality assessment of the MAs, with guidance from K-hY. Intraclass correlation coefficients (ICCs) were used to assess inter-rater reliability for each item.11 The χ2 test was used to compare the quality of MAs published in journals indexed by the CSCD with those in non-CSCD journals. Statistical significance was defined as p<0.05. Data analysis was performed with SPSS V.13.0.
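The analyses themselves were run in SPSS V.13.0, and no syntax is reported in the article. Purely as an illustration of the comparison described above, the following Python sketch performs a χ2 test of item compliance between CSCD-indexed and non-CSCD-indexed journals; the counts are hypothetical and do not come from the study data.

```python
# Illustrative only: the authors used SPSS V.13.0; this sketch shows an
# equivalent chi-square comparison in Python. The counts are hypothetical.
from scipy.stats import chi2_contingency

# Rows: journals indexed in CSCD vs not; columns: item reported 'yes' vs 'no'.
# Hypothetical example for a single checklist item.
table = [[60, 55],    # CSCD-indexed: reported / not reported
         [240, 252]]  # non-CSCD:     reported / not reported

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# Statistical significance was defined as p < 0.05 in the study.
```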
Results
Search
A total of 2930 potentially relevant records were identified from the databases. Screening excluded 1977 records because they were duplicates or were not MAs. Another 346 reviews that were not MAs of observational studies were excluded after examination of the full texts. Finally, 607 MAs were considered eligible for our study (figure 1 and online supplementary file 3).
Figure 1 Flow chart of the systematic search.
Descriptive characteristics
The first MAs of observational studies in China were published in 1995, and the number of published MAs has increased steadily since then. The 607 included MAs were published in 265 different Chinese journals. Less than one-third (28.5%) of the MAs were supported by government funding. The most commonly studied conditions were neoplasms (43%) and diseases of the circulatory system (17%). The number of authors ranged from 1 to 11, with a median of four. Less than one-fifth (18.9%) of the MAs were published in journals indexed by the CSCD. In addition, 85% of the articles included the term ‘meta-analysis’ in the title. None of the MAs was an update of a previous review (table 1).
Table 1 Descriptive characteristics of included MAs
Risk of bias instruments
Only 52.2% of the MAs reported that they assessed the quality of the included primary studies. Of these, 39 (6.3%) used the Newcastle-Ottawa Scale (NOS); 19 (3.1%) used the Critical Appraisal Skills Programme (CASP) checklist; 12 (2%) used the Cochrane Collaboration (CC) scale; 29 (4.8%) used the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist; and 218 (36%) either gave only a reference for the scale or used unnamed scales. Among the 607 included MAs, 548 (90.3%) provided the name and version of the statistical software employed, including particulars of any special features used (table 2).
Table 2 Tools of quality assessment used in the included meta-analyses
Searching details for studies
Within the included studies, the median number of databases searched was 4 (range 0–16). Among Chinese-language databases, the most commonly searched was CNKI (65.7%), followed by CBM (51.7%) and VIP (49.4%). PubMed was the most commonly searched English-language database (66%), followed by EMBASE (28.7%) and the Cochrane Library (16.8%). A total of 50 (8.2%) MAs did not search any Chinese-language database, and 126 (20.8%) did not search any English-language database. In addition, 559 (92.1%) reported at least some of their search terms, but only 87 (14.3%) presented a full search strategy (search terms and Boolean operators; table 3).
Table 3 Search details reported by the included meta-analyses
AMSTAR checklist (current edition) assessment
Table 4 summarises the risk of bias results for all MAs. Compliance with individual AMSTAR checklist items ranged from 4.5% to 75.8%. The overall agreement among reviewers for the AMSTAR assessment was moderate (ICC=0.81; 95% CI 0.71 to 0.89). Six AMSTAR items (1, 2, 3, 4, 5 and 11) were reported in less than 50% of the reports. No significant difference was found between MAs published in CSCD-indexed and non-CSCD-indexed journals. Compared with studies published before 2010, compliance improved significantly for seven AMSTAR items (1, 2, 3, 5, 6, 7 and 10) (p<0.05; table 4).
Table 4 AMSTAR assessment of methodological characteristics (n=607)
MOOSE checklist (current edition) assessment
Table 5 shows the proportion of MAs reporting each item of the MOOSE checklist. Compliance with the MOOSE checklist items ranged from 0% to 96.7%. The overall agreement among reviewers for the MOOSE assessment was also moderate (ICC=0.79; 95% CI 0.68 to 0.87). Fourteen MOOSE checklist subitems (2, 5, 6, 7, 8, 9, 12, 13, 14, 15, 16, 20, 29 and 35) were mentioned in less than 50% of the reports, and four of these subitems (7, 13, 14 and 16) were reported in less than 10%. There was also no statistically significant difference between MAs published in CSCD-indexed and non-CSCD-indexed journals. In addition, the quality of reporting improved significantly for items concerning the background (item 3), search strategy (items 7, 8, 9, 10, 12, 14 and 15), methods (items 17, 18, 19, 20, 21, 22 and 24), results (items 25, 27 and 28) and discussion (items 30 and 31). However, no study provided the name and version of the search software employed or mentioned any special features used (subitem 11; table 5).
Table 5 MOOSE assessment of reporting characteristics (n=607)
Discussion
Our study shows that a large number of MAs of observational studies have been conducted recently, with 607 publications identified in Chinese journals. This study was the first to examine the compliance of Chinese MAs of observational studies with the MOOSE reporting guideline and to assess their methodological quality with the AMSTAR tool.
This study found that the methodological quality of Chinese MAs is poor. In particular, the literature searches were not comprehensive and risk of bias was not assessed in the majority of the MAs we examined. Reporting the details of the search strategy is a requirement for MAs, as this information allows readers to assess comprehensiveness and ensures reproducibility.12 This study demonstrated that 85.8% of the MAs examined did not perform comprehensive literature searches: only 14.3% of the studies presented their search strategy, 15.7% searched the grey literature and 67.7% used manual retrieval. The lack of detailed retrieval strategies and of information on the qualifications of the searchers (ie, librarians and investigators) should also be noted. Ma et al13 reported that 59.1% of systematic reviews of acupuncture interventions published in Chinese journals did not perform comprehensive literature searches and that 97.7% did not search the grey literature or ongoing studies. Overall, the lack of a comprehensive search was clearly the weakest aspect of the MAs identified in Chinese journals.
Risk of bias assessment is important because poor methodological quality can lead to biased estimates. In the present study, nearly one-half of the included MAs did not mention how the quality of the included primary studies was assessed. In addition, 29 (4.8%) studies used the STROBE criteria. However, it should be noted that the STROBE criteria were not developed as a tool for assessing the quality of published observational studies; rather, they were developed solely to provide guidance on how to report observational research.14 Similarly, Bruno et al15 reported that about half of systematic reviews and meta-analyses used STROBE inappropriately as a methodological quality assessment tool. In some instances, specific checklists for observational studies were used, including the NOS and CASP, which have been shown to be generally useful for assessing the quality of non-randomised studies despite some limitations.16,17 These assessments serve to identify the strengths and limitations of the included studies, including the strength of the evidence for a given outcome. The NOS has been endorsed by the Cochrane Collaboration for use in systematic reviews of non-randomised studies, specifically cohort and case–control studies. CASP is an instrument for the appraisal of systematic reviews based on 10 questions addressing the key components of methodological quality. Therefore, to obtain valuable findings from MAs of observational studies, adequate quality assessment based on the correct study design is essential.
In addition, many studies did not report key aspects of MA methodology, which reduces confidence in the results and weakens the conclusions. For example, more than half of the studies did not report an ‘a priori’ design, and a further 11.5% did not clearly reveal their design information. The most common means of assessing publication bias was the funnel plot, yet more than one-third of the studies did not consider or assess publication bias despite considerable evidence for its existence and its potential influence on MA results. Only 4.5% of the studies stated conflicts of interest, which is a concern because Barnes and Bero18 reported that funding sources may influence the outcomes and quality of research. These important methodological components must be considered in future research.
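The article does not state how publication bias was examined beyond the use of funnel plots. As an illustration only, and not the authors' procedure, the sketch below applies Egger's regression test, a common numerical companion to the funnel plot, to hypothetical effect sizes.

```python
# Illustrative sketch (not the authors' code): Egger's regression test, one
# common way to quantify the funnel plot asymmetry that can signal
# publication bias. The effect sizes below are hypothetical log odds ratios.
import numpy as np
import statsmodels.api as sm

log_or = np.array([0.42, 0.10, 0.55, 0.31, 0.68, 0.05, 0.47])   # study effects
se     = np.array([0.12, 0.30, 0.09, 0.22, 0.35, 0.18, 0.11])   # standard errors

std_effect = log_or / se          # standardised effect
precision  = 1.0 / se             # inverse of the standard error

X = sm.add_constant(precision)    # intercept + precision as regressors
fit = sm.OLS(std_effect, X).fit()

intercept, intercept_p = fit.params[0], fit.pvalues[0]
print(f"Egger intercept = {intercept:.2f} (p = {intercept_p:.3f})")
# An intercept far from zero suggests funnel plot asymmetry.
```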
Accurate reporting is essential to maintain a clear scientific record, which can then be used for the synthesis of existing evidence, clinical decision-making and health policy determination. Groenwold et al19 reported that the quality of reporting on confounding in observational studies was rather poor, even in high-impact general medical journals. Our study showed that less than 50% of the included studies assessed confounding. Because it cannot be guaranteed that known and unknown confounding factors are distributed equally among the observation groups, results of this type are susceptible to distortion. Therefore, clinicians reading reports of MAs must be able to appraise the methods and validity of the study in order to interpret the results with confidence.
As mentioned above, we found that the quality of reporting of search strategies and methods improved significantly after publication of the MOOSE checklist. However, this observation is prone to many biases and could simply reflect improvements in research methods over time. Nonetheless, room for improvement remains. For example, approximately one-half of the studies did not present risk of bias assessment results, which could have affected the cumulative evidence, even though many studies have previously shown the importance of assessing bias and heterogeneity across studies.20,21 Disappointingly, 41.2% of the studies did not describe their statistical methods in sufficient detail; indeed, some studies did not explore the reasons for statistical heterogeneity and simply pooled results using a random effects model to account for it. These shortcomings may have led to incorrect or inappropriate interpretations of the results.
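To make the criticism concrete, the following minimal sketch shows what a DerSimonian-Laird random-effects pooling with explicitly reported heterogeneity statistics (Cochran's Q, I² and τ²) could look like; the effect sizes and variances are hypothetical, and reporting these quantities is the kind of statistical detail that many of the included MAs omitted.

```python
# Minimal DerSimonian-Laird random-effects pooling with heterogeneity
# statistics (Q, I^2, tau^2). Effect sizes and variances are hypothetical.
import numpy as np

y = np.array([0.25, 0.40, -0.05, 0.31, 0.18])   # study effects (e.g. log RR)
v = np.array([0.04, 0.09, 0.06, 0.02, 0.05])    # within-study variances

w = 1.0 / v                                      # fixed-effect weights
y_fixed = np.sum(w * y) / np.sum(w)

# Cochran's Q and I^2 quantify between-study heterogeneity.
Q = np.sum(w * (y - y_fixed) ** 2)
df = len(y) - 1
I2 = max(0.0, (Q - df) / Q) * 100

# DerSimonian-Laird estimate of the between-study variance tau^2.
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_star = 1.0 / (v + tau2)                        # random-effects weights
y_re = np.sum(w_star * y) / np.sum(w_star)
se_re = np.sqrt(1.0 / np.sum(w_star))

print(f"Q = {Q:.2f}, I^2 = {I2:.1f}%, tau^2 = {tau2:.3f}")
print(f"pooled effect = {y_re:.3f} "
      f"(95% CI {y_re - 1.96*se_re:.3f} to {y_re + 1.96*se_re:.3f})")
```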
Panic et al22 reported that the endorsement of PRISMA resulted in an increase in both the quality of reporting and the methodological quality of systematic reviews. Our study showed that less than one-fifth of the included studies were published in journals indexed in the Chinese Science Citation Database (CSCD), which is similar to the Science Citation Index. The reason may be the overall poor quality of work and the many reporting deficits of Chinese MAs in this field. Therefore, broader promotion of methodological quality guidelines is a necessary step in enhancing the dissemination and implementation of AMSTAR and MOOSE.
The strengths of this study include its comprehensive literature search of five Chinese databases, which ensures a high degree of representativeness. In addition, both the eligibility assessment and the data extraction were conducted by two independent investigators, with a third investigator providing quality evaluation. Nonetheless, our study has some limitations. First, the search terms ‘meta-analysis’, ‘systematic review’ and ‘pooled analysis’ were used, although some potentially eligible MAs may not have included these terms in their publications. Second, this study included only MAs published in Chinese journals, whereas Chinese investigators increasingly publish articles in international journals. Third, our study relied on what the authors reported, and it is possible that authors omitted important details from their reports or that the peer-review process resulted in the removal of key information from these reviews.
Conclusion
The goal of the present study was to provide readers with a broad overview of the reporting and methodological characteristics of published Chinese MAs of observational studies. Although many such MAs have been published, their quality is troubling. Thus, reporting guidelines and methodological tools should be used to improve the quality of future MAs.
References
Supplementary materials
Supplementary Data
- Data supplement 1 - Online PRISMA Checklist
- Data supplement 2 - Online PRISMA Flow-Diagram
- Data supplement 3 - Online Search-strategy
Footnotes
Z-wZ and JC contributed equally to this work.
Contributors Z-wZ and H-hY contributed to the design and implementation of the study. Z-wZ and JC searched the literature. ZL, J-cM, J-lL and JW participated in data extraction and quality assessment of the MAs, with guidance from K-hY. All the authors participated in data interpretation. Z-wZ and K-hY wrote the first draft of the report and all the other authors commented on the draft and approved the final version.
Funding This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.
Data sharing statement No additional data are available.