Mona Ghannad presents a systematic review that documents and classifies spin, or overinterpretation, as well as facilitators of spin, in recent clinical studies evaluating the performance of biomarkers in ovarian cancer. Video published on 11 Oct 2017.
The objective of this systematic review was to document and classify spin, or overinterpretation, as well as facilitators of spin, in recent clinical studies evaluating the performance of biomarkers in ovarian cancer.
We searched PubMed systematically for all studies published in 2015. Studies were eligible for inclusion if they described 1 or more trial designs for the identification and/or validation of prognostic, predictive, or diagnostic biomarkers in ovarian cancer. Reviews, animal studies, and cell line studies were excluded. All studies were screened by 2 reviewers. To document and characterize spin, we collected information on the quality of the evidence supporting each study's conclusions, linking the reported performance of the marker to the outcomes claimed.
In total, 1026 potentially eligible articles were retrieved by our search strategy, and 345 studies met all eligibility criteria and were included. The first 200 studies, ranked by publication date, will be included in our final analysis. Data extraction was done by one researcher and validated by a second. Here we report, for the first 50 studies, the information extracted and analyzed on study and journal characteristics, the key evidence presented in the methods, and the conclusions claimed. Forms of spin and facilitators of spin were identified in studies seeking to establish the performance of the discovered biomarker.
The forms of spin identified (Table) were:
- claiming other purposes of the biomarker that were not investigated (18 of 50 studies [36%]);
- incorrect presentation of results (15 of 50 studies [30%]);
- mismatch between the biomarker’s intended clinical application and population recruited (11 of 50 studies [22%]);
- mismatch between intended aim and conclusion (7 of 50 studies [14%]);
- and mismatch between abstract conclusion and results presented in the main text (6 of 50 studies [12%]).
Frequently observed facilitators of spin were:
- not clearly prespecifying a formal test of hypothesis (50 of 50 studies [100%]);
- not stating sample size calculations (50 of 50 studies [100%]);
- not prespecifying a positivity threshold for a continuous biomarker (17 of 43 studies [40%]);
- not reporting measures of imprecision or statistical tests for the data shown (ie, confidence intervals, P values) (12 of 50 studies [24%]);
- and selective reporting of significant findings, with a mismatch between the primary-outcome results reported in the abstract and those reported in the main text (9 of 50 studies [18%]).
Spin was frequently documented in the abstracts, results, and conclusions of clinical studies evaluating the performance of biomarkers in ovarian cancer. Inflated and selective reporting of biomarker performance may account for a considerable amount of waste in the biomarker discovery process. Strategies to curb exaggerated reporting are needed to improve the quality and credibility of published biomarker studies.