
Eight problems with literature reviews & how to fix them

Written by: Andrew Pullin and Neal Haddaway

Although CEE's mission is to support the conduct of systematic reviews and maps as the highest standard of evidence synthesis to inform environmental decisions, it also recognises its responsibility to support all those who want to improve the standards of evidence reviews.

CEE has recently launched CEEDER (http://www.environmentalevidence.org/ceeder), a service for decision-makers and other evidence users that provides a comprehensive database of recently published reviews (some 700 to date) claiming to estimate either the effect of an environmental intervention or the environmental impact of exposure to human activity. The database includes a critical appraisal of the reliability of each review, and the overall impression is clear: in terms of conduct, transparency of reporting and replicability of methodology, a substantial majority of reviews do not meet the standards expected.

Traditional ways of reviewing the literature may be susceptible to bias and can end up giving us incorrect conclusions. It seems the environmental science and policy community has dropped the ball on developing basic standards of evidence synthesis in the sector. This is of particular concern when reviews claim to address key policy- and practice-relevant questions. The good news is that there are simple steps that authors can take (and that editors and peer reviewers can advocate) to improve the reliability of evidence reviews.

In a recent paper in Nature Ecology & Evolution, Haddaway et al. outline eight major problems that can occur with traditional ways of reviewing the literature and provide concrete advice on how to avoid them, much of it not prohibitively costly to implement: https://rdcu.be/b8pp0

Here is a summary of the eight problems with traditional literature reviews, each with a suggested solution.

Problem 1: Lack of relevance – limited stakeholder engagement can produce a review that is of limited practical use to decision-makers.

Solution 1: Stakeholders can be identified, mapped and contacted for feedback and inclusion without the need for extensive budgets – check out best-practice guidance:  https://stakeholdersandsynthesis.github.io/  

Problem 2: Mission creep – reviews that don’t publish their methods in an a priori protocol can suffer from shifting goals and inclusion criteria.

Solution 2: Carefully design and publish an a priori protocol that outlines in detail the planned methods for searching, screening, data extraction, critical appraisal and synthesis. Make use of existing organisations to support you (e.g. @EnvEvidence; http://www.environmentalevidence.org/guidelines/section-4).

Problem 3: A lack of transparency in the review methods may mean that the review cannot be replicated (replicability being a central tenet of the scientific method!). https://onlinelibrary.wiley.com/doi/10.1002/ece3.1722

Solution 3: Be explicit, and make use of high-quality guidance and standards for review conduct (e.g. CEE Guidance,  http://www.environmentalevidence.org/information-for-authors) and reporting (PRISMA,  http://www.prisma-statement.org/  or ROSES,  http://www.roses-reporting.com).

Problem 4: Selection bias (where included studies are not representative of the evidence base) and a lack of comprehensiveness (stemming from an inappropriate search strategy) can mean that reviews end up with the wrong evidence for the question at hand. https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-018-0599-2

Solution 4: Carefully design a search strategy with an information specialist; trial the search strategy against a benchmark list of known relevant studies; use multiple bibliographic databases, languages and sources of grey literature; and publish the search methods in an a priori protocol for peer review.
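
As an illustration, here is a minimal Python sketch of how a draft search strategy might be trialled against a benchmark list by measuring recall. All the DOIs are hypothetical placeholders; in practice the search results would be exported from the bibliographic databases used.

```python
# Minimal sketch: estimate the recall of a draft search strategy against
# a benchmark list of studies already known to be relevant.
# All DOIs below are hypothetical placeholders.

# Studies a comprehensive search ought to retrieve.
benchmark = {
    "10.1000/example.001",
    "10.1000/example.002",
    "10.1000/example.003",
}

# Records returned by the trial search across the chosen databases.
search_results = {
    "10.1000/example.001",
    "10.1000/example.003",
    "10.1000/other.999",
}

found = benchmark & search_results
recall = len(found) / len(benchmark)

print(f"Benchmark articles retrieved: {len(found)}/{len(benchmark)}")
print(f"Recall: {recall:.0%}")
for doi in sorted(benchmark - search_results):
    print(f"Missed (revise search terms): {doi}")
```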

Problem 5: The exclusion of 'grey literature' (research that is harder to obtain because it is not commercially published) and failure to test for evidence of publication bias can result in incorrect or misleading conclusions. https://onlinelibrary.wiley.com/doi/abs/10.1002/jrsm.1433

Solution 5: Search multiple databases and attempt to find grey literature, including both 'file-drawer' (unpublished academic) research and organisational reports, and test for evidence of potential publication bias. https://www.sciencedirect.com/science/article/abs/pii/S0006320715300689
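
One widely used formal check for publication bias is Egger's regression test for funnel-plot asymmetry. Below is a minimal Python sketch, assuming effect sizes and standard errors have already been extracted; the numbers are made up for illustration.

```python
# Minimal sketch of Egger's regression test for funnel-plot asymmetry,
# a common check for small-study/publication bias.
# Effect sizes and standard errors are illustrative, made-up numbers.
import numpy as np
import statsmodels.api as sm

effects = np.array([0.42, 0.35, 0.51, 0.12, 0.60, 0.28])  # e.g. log response ratios
ses = np.array([0.10, 0.12, 0.20, 0.05, 0.25, 0.08])      # their standard errors

precision = 1.0 / ses
standardized = effects / ses

# Regress standardized effects on precision; an intercept that differs
# from zero suggests funnel-plot asymmetry (possible publication bias).
X = sm.add_constant(precision)
fit = sm.OLS(standardized, X).fit()

intercept, intercept_p = fit.params[0], fit.pvalues[0]
print(f"Egger intercept = {intercept:.2f}, p = {intercept_p:.3f}")
```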

Problem 6: Traditional reviews often lack appropriate critical appraisal of the validity of included studies, treating all evidence as equally valid – yet some research is more valid than other research, and the synthesis needs to account for this.

Solution 6: Carefully plan and trial a critical appraisal tool before starting the process in full, learning from existing robust critical appraisal tools, e.g. https://methods.cochrane.org/bias/risk-bias-non-randomized-studies-interventions
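
To make the idea of a domain-based appraisal tool concrete, here is a minimal Python sketch of a structured risk-of-bias record in which the overall rating is driven by the worst-rated domain. The domains and the 'worst domain wins' rule are illustrative assumptions, not a validated instrument such as the Cochrane tool linked above.

```python
# Minimal sketch: a structured critical-appraisal record, loosely inspired
# by domain-based risk-of-bias tools. The domains and the decision rule
# are illustrative assumptions, not a validated instrument.
RATINGS = {"low": 0, "unclear": 1, "high": 2}

def overall_validity(domains):
    """Assumed rule: the overall rating equals the worst-rated domain."""
    return max(domains.values(), key=RATINGS.__getitem__)

study = {
    "selection bias": "low",
    "performance bias": "unclear",
    "detection bias": "low",
    "reporting bias": "high",
}
print(f"Overall risk of bias: {overall_validity(study)}")  # -> high
```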

Problem 7: Inappropriate synthesis (e.g. vote-counting or unsuitable statistical methods) can negate all of the preceding systematic effort. Vote-counting (tallying studies by the statistical significance of their results) ignores both study validity and the magnitude of effect sizes.

Solution 7: Select the synthesis method carefully, based on the nature of the data being analysed. Vote-counting should never be used in place of meta-analysis. Where quantitative synthesis is not possible, formal methods for narrative synthesis should be used to summarise and describe the evidence base. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.178.3100&rep=rep1&type=pdf
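
To see why vote-counting misleads, the sketch below contrasts a significance tally with a simple inverse-variance (fixed-effect) pooled estimate. The effect sizes and standard errors are hypothetical, and a real synthesis should use a dedicated meta-analysis package; note that none of the individual studies reaches significance, yet the pooled effect does.

```python
# Minimal sketch contrasting vote-counting with an inverse-variance
# (fixed-effect) pooled estimate. Effects/SEs are illustrative only;
# real syntheses should use a dedicated meta-analysis package.
import math

studies = [  # (effect size, standard error) -- hypothetical values
    (0.30, 0.20), (0.25, 0.18), (0.40, 0.35), (0.10, 0.06), (0.35, 0.30),
]

# Vote-counting: tally studies by statistical significance (|z| > 1.96).
significant = sum(1 for y, se in studies if abs(y / se) > 1.96)
print(f"Vote count: {significant}/{len(studies)} studies 'significant'")

# Inverse-variance pooling: weight each study by 1/SE^2 so precise
# studies count more, and report effect magnitude with uncertainty.
weights = [1.0 / se**2 for _, se in studies]
pooled = sum(w * y for (y, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
print(f"Pooled effect: {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")
```

Here the vote count finds 0/5 'significant' studies, while the pooled estimate (about 0.14, 95% CI excluding zero) shows a clear overall effect: the tally discards exactly the precision and magnitude information that meta-analysis uses.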

Problem 8: A lack of consistency and error checking, for example when a single reviewer makes decisions without consensus, can introduce errors and biases. https://onlinelibrary.wiley.com/doi/abs/10.1002/jrsm.1369

Solution 8: Have two reviewers screen at least a subset of the evidence base to ensure consistency and a shared understanding of the methods before proceeding. Ideally, have two reviewers make all decisions independently and then consolidate them.
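
One simple way to check consistency between two reviewers is to compare their decisions on a shared subset and compute chance-corrected agreement. Here is a minimal Python sketch using Cohen's kappa; the include/exclude decisions are hypothetical.

```python
# Minimal sketch: check consistency between two reviewers screening the
# same subset of records, using raw agreement and Cohen's kappa.
# The include/exclude decisions below are hypothetical.
from collections import Counter

reviewer_a = ["include", "exclude", "include", "exclude", "exclude", "include"]
reviewer_b = ["include", "exclude", "exclude", "exclude", "exclude", "include"]

n = len(reviewer_a)
observed = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n

# Expected chance agreement from each reviewer's marginal rates.
counts_a, counts_b = Counter(reviewer_a), Counter(reviewer_b)
expected = sum(
    (counts_a[label] / n) * (counts_b[label] / n)
    for label in set(reviewer_a) | set(reviewer_b)
)

kappa = (observed - expected) / (1 - expected)
print(f"Observed agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")
```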

Summary: There is a lack of awareness and appreciation of the methods needed to make reviews as free from bias and as reliable as possible, as demonstrated by recent, flawed, high-profile reviews and by the many reviews that incorrectly claim to be systematic reviews.

We call on review authors to use the available online resources to conduct more rigorous reviews, on editors and peer reviewers to use the same resources to 'gate-keep' more strictly, and on the community of methodologists to better support the broader research community.

Only by working collaboratively can we build and maintain a strong system of rigorous, evidence-informed decision-making in environmental management.
