Last updated: September 27, 2017
This Section includes an overview of different forms of synthesis: narrative, quantitative and qualitative. All Systematic Reviews should present some form of narrative synthesis, and many will contain more than one of these approaches (e.g. Bowler et al. 2010). It is not the intention here to give detailed guidelines on synthesis methods, since each has its own supporting literature. This Section concentrates on how to decide on the correct form of synthesis to conduct.
Narrative synthesis is the tabulation and/or visualisation (often with descriptive statistics) of the findings of individual primary studies, with supporting text to explain the context. A narrative synthesis is often viewed as preparatory when compared with quantitative synthesis, and this may be true in terms of analytical rigour and statistical power, but narrative synthesis has advantages when dealing with broader questions and disparate outcomes. Often narrative synthesis is the only option when faced with a pool of disparate studies of relatively high susceptibility to bias, but such syntheses also accompany quantitative syntheses in order to provide context and background and to help characterise the full evidence base. Some form of narrative synthesis should be provided in any Systematic Review, simply to present the context and an overview of the evidence. A valuable guide to the conduct of narrative synthesis is provided by Popay (2006).
Narrative synthesis requires the construction of tables, developed from data coding and extraction forms (see Section 8) that provide details of the study or population characteristics, data quality, and relevant outcomes, all of which should have been defined a priori in the Protocol. Narrative synthesis should include a statement of the measured effect reported in each study and the Review Team’s assessment of study validity (including internal and external validity). Where the validity of studies varies greatly, reviewers may wish to give greater weight to some studies than others. In these instances it is vital that the studies have been subject to standardised a priori critical appraisal with the value judgments regarding both internal and external validity clearly stated. Ideally these will have been subject to stakeholder scrutiny at the Protocol stage. The level of detail employed and emphasis placed on narrative synthesis will be dependent on whether other types of synthesis are also employed. An example of an entirely narrative synthesis (Davies et al. 2006) and a narrative synthesis that complements a quantitative synthesis (Bowler et al. 2010) are available in the CEE Library.
Use of simple vote counting as a form of synthesis (e.g. comparing how many studies showed a positive versus a negative or neutral outcome based on statistical significance of the results) should be avoided. Vote counting is misleading because this procedure does not take into account differences in study validity and power. Moreover, vote counting does not provide an estimate of the magnitude of the effect in question. Whilst tabulation may make it easy for the reader to vote count, authors should avoid its use in developing and reporting their findings.
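A toy numerical example (hypothetical effect sizes and variances, not from any real study) shows how vote counting can mislead: five small studies each estimate the same positive effect, none significant on its own, yet inverse-variance pooling of the same data yields a clearly significant combined effect.

```python
import math

# Five hypothetical small studies, each with the same positive effect but
# too little power to reach significance individually.
effects = [0.30] * 5
variances = [0.08] * 5

# Vote count by per-study significance: every study is "non-significant".
sig = sum(1 for e, v in zip(effects, variances) if abs(e) / math.sqrt(v) > 1.96)
print(f"studies individually significant: {sig} of {len(effects)}")

# Inverse-variance pooling of the same studies tells a different story.
w = [1.0 / v for v in variances]
pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
se = math.sqrt(1.0 / sum(w))
print(f"pooled z = {pooled / se:.2f}")
```

A vote count here reports zero significant studies, while the pooled z-statistic exceeds the conventional 1.96 threshold.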
Recording of key characteristics of each study included in a narrative synthesis is vital if the Systematic Review is to be useful in summarising the evidence base. Key characteristics are normally presented in tabular form and a minimum list is given below.
It should be noted here that the interpretation of the results provided by the authors of the study is normally not summarised as this could simply compound subjective assessments or decisions.
Usually, when attempting to measure the effect of an intervention or exposure, a quantitative synthesis is desirable. This provides a combined effect and a measure of its variance within and between studies. Quantitative syntheses can be powerful, enabling study of the impacts of effect modifiers and increasing the power to predict outcomes of interventions or exposures under varying environmental conditions.
Meta-analysis and meta-regression are now commonly used in the environmental sciences and there is a well-developed supporting literature (e.g. Arnqvist & Wooster 1995; Osenberg et al. 1999; Gurevitch & Hedges 2001; Gates 2002; Borenstein et al. 2009; Koricheva et al. 2013), as well as online guidance and training; consequently, we have not provided detailed guidance here. Meta-analysis provides summary effect sizes, with each data set weighted according to some measure of its reliability (e.g. with more weight given to large studies with precise effect estimates and less to small studies with imprecise effect estimates). Generally, each study is weighted in proportion to its sample size or in inverse proportion to the variance of its effect. Meta-regression aims to provide summary effects after adjusting for study-level covariates.
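As a minimal sketch of inverse-variance weighting (the effect sizes and variances below are illustrative only; real analyses should use a dedicated meta-analysis package), the fixed-effect summary and its confidence interval can be computed as:

```python
import math

# Hypothetical per-study effect sizes (e.g. standardised mean differences)
# and their variances; the numbers are invented for illustration.
effects = [0.40, 0.15, 0.62, 0.28]
variances = [0.04, 0.01, 0.09, 0.02]

# Inverse-variance weights: precise studies (small variance) get more weight.
weights = [1.0 / v for v in variances]

# Fixed-effect pooled estimate and its standard error.
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))

# Approximate 95% confidence interval for the summary effect.
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
print(f"pooled effect = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

Note how the second study, with the smallest variance, dominates the pooled estimate.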
Pooling of individual effects can be undertaken with fixed-effects or random-effects statistical models. Fixed-effects models estimate the combined effect assuming there is a single true underlying effect across the studies, whereas random-effects models assume there is a distribution of effects that depend on study characteristics. Random-effects models include inter-study variability; thus, when there is heterogeneity, a random-effects model usually has wider confidence intervals on its pooled effect than a fixed-effects model (NHS CRD 2001; Khan et al. 2003). Random- or mixed-effects models (containing both random and fixed effects) are often more appropriate for the analysis of ecological data because the numerous complex interactions common in ecology are likely to result in heterogeneity between studies or sites. Exploration of heterogeneity is often more important than the overall pooling from a management perspective, as there is rarely a one-size-fits-all solution to environmental problems.
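The contrast between the two models can be sketched with the standard DerSimonian-Laird estimator of between-study variance (tau-squared). The data below are hypothetical, and a real analysis would use a dedicated meta-analysis package, but the sketch shows why the random-effects interval is wider when heterogeneity is present:

```python
import math

# Illustrative effect sizes and within-study variances (invented data).
effects = [0.40, 0.15, 0.62, 0.28]
variances = [0.04, 0.01, 0.09, 0.02]
k = len(effects)

# Fixed-effect weights and pooled estimate (assumes one true effect).
w = [1.0 / v for v in variances]
fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
se_fixed = math.sqrt(1.0 / sum(w))

# DerSimonian-Laird estimate of between-study variance (tau^2),
# based on Cochran's Q heterogeneity statistic.
Q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (k - 1)) / C)

# Random-effects weights add tau^2 to each within-study variance,
# widening the confidence interval on the pooled effect.
w_re = [1.0 / (v + tau2) for v in variances]
random_eff = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
se_random = math.sqrt(1.0 / sum(w_re))

print(f"fixed:  {fixed:.3f} +/- {1.96 * se_fixed:.3f}")
print(f"random: {random_eff:.3f} +/- {1.96 * se_random:.3f}")
```

When tau-squared is zero the two models coincide; as heterogeneity grows, the random-effects weights become more equal across studies and the interval widens.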
Relationships between differences in characteristics of individual studies and heterogeneity in results can be investigated as part of the meta-analysis, thus aiding the interpretation of ecological relevance of the findings. Exploration of these differences may be facilitated by construction of tables that group studies with similar characteristics and outcomes together. Important factors that could produce variation in effect size should be defined a priori and their relative importance considered prior to data extraction to make the most efficient use of data. These factors may include differing populations, interventions, outcomes, and methodology. Resulting variation in effect sizes across studies can then be explored by meta-regression.
If sufficient data exist, meta-analyses are often undertaken on subgroups and the significance of differences assessed. Subgroup analyses must be interpreted with caution because statistical power may be limited (making Type II errors possible) and multiple analyses of numerous subgroups could result in spurious significance (making Type I errors possible). A mixed-effects meta-regression approach might be adopted, whereby statistical models including study-level covariates are fitted to the full dataset, with studies weighted according to the precision of the estimate of treatment effect after adjustment for covariates (Sharp 1998).
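A weighted least-squares fit of effect size on a single study-level covariate conveys the flavour of meta-regression. The moderator ("latitude") and all numbers below are invented for illustration, and dedicated packages should be used for proper standard errors and random effects:

```python
# Illustrative meta-regression: does effect size vary with a study-level
# covariate? Inverse-variance weighted least squares on hypothetical data.
effects = [0.10, 0.25, 0.40, 0.55]
variances = [0.02, 0.03, 0.02, 0.04]
latitude = [30.0, 40.0, 50.0, 60.0]  # hypothetical moderator values

w = [1.0 / v for v in variances]
sw = sum(w)

# Weighted means of the moderator and the effect sizes.
xbar = sum(wi * x for wi, x in zip(w, latitude)) / sw
ybar = sum(wi * y for wi, y in zip(w, effects)) / sw

# Weighted slope and intercept of effect size on the moderator.
num = sum(wi * (x - xbar) * (y - ybar)
          for wi, x, y in zip(w, latitude, effects))
den = sum(wi * (x - xbar) ** 2 for wi, x in zip(w, latitude))
slope = num / den
intercept = ybar - slope * xbar
print(f"effect ~ {intercept:.3f} + {slope:.4f} * latitude")
```

A positive slope here would suggest the intervention's effect increases with the moderator, which is the kind of study-level relationship meta-regression is designed to detect.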
Despite the attempt to achieve objectivity in reviewing scientific data, considerable subjective judgment is involved when undertaking meta-analyses. These judgements include decisions about choice of effect measure, how data are combined to form datasets, which data sets are relevant and which are methodologically sound enough to be included, methods of meta-analysis, and the issue of whether and how to investigate sources of heterogeneity (Thompson 1994). Reviewers should state these decisions explicitly and distinguish between them to minimise bias and increase transparency.
If possible, a quantitative synthesis should be accompanied by an exploration of possible effects of publication bias. Positive and/or statistically significant results are more readily available than non-significant or negative results because they are more likely to be published in high-impact journals and in the English language. Whilst searching methodology can reduce this bias, it is still uncertain how influential it might be. There are a number of exploratory plots and tests for publication bias. One example is the funnel plot, often accompanied by the Egger test (Egger et al. 1997). This approach tests for a relationship between the size and precision of study effects, plotted on the x- and y-axes of the funnel plot. However, a funnel plot can change greatly depending on the measure of precision used (Lau et al. 2006), and sample size may be the only appropriate measure of precision for effect measures used in ecology, such as the standardised mean difference (SMD) or response ratio. Another approach is to calculate the fail-safe number, which is the number of null-result studies that would have to be added to a meta-analysis to lower the significance or the magnitude of the effect to a specified level (e.g. where it would be considered statistically or biologically non-significant), but see Scargle (2000). Wherever possible, grey literature and unpublished studies should be included in a meta-analysis to allow direct assessment of publication bias by comparison of effect sizes in published and unpublished studies.
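Rosenthal's version of the fail-safe number has a simple closed form based on a Stouffer combination of per-study z-scores. The z-values below are invented for illustration:

```python
# Rosenthal's fail-safe number: how many unpublished null-result studies
# would be needed to bring a combined (Stouffer) Z below significance.
# The z-scores are hypothetical, one per included study.
z_scores = [2.1, 1.8, 2.5, 1.4, 2.9]
k = len(z_scores)

z_alpha = 1.645  # one-tailed critical value at alpha = 0.05
total_z = sum(z_scores)

# Number of zero-effect studies that would dilute the combined Z
# to exactly the critical value.
fail_safe_n = (total_z ** 2) / (z_alpha ** 2) - k
print(f"fail-safe N = {fail_safe_n:.1f}")
```

A large fail-safe number relative to the number of included studies suggests the summary result is robust to unseen null studies, though, as Scargle (2000) notes, the method rests on strong assumptions about the unpublished effects.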
It is common in the social sciences to employ qualitative methods in which the views of individual people are recorded in relation to a question. When open-ended questions are asked and complex answers received, the data are not formally quantified. In such studies the authors are often seeking to characterise the range of views or reactions to a particular question or set of questions. The role of qualitative data synthesis is therefore quite distinct and serves to increase understanding of some environmental issues. Evidence from qualitative studies may help generate hypotheses that might be further tested by quantitative methods. Qualitative evidence may also complement quantitative evidence and contribute to a mixed-methods approach.
A synthesis of evidence from qualitative research can explore questions such as: how do people experience interventions that impact on their environment or social settings (for example, moving from a farming system that uses non-organic pest control to one that employs integrated or organic pest control)? Why does an intervention work (or not), for whom, and in what circumstances? It may be desirable to draw on qualitative evidence to address questions such as what the barriers and facilitators to accessing given interventions are, or what impact specific barriers and facilitators have on people, their experiences and behaviours.
To date qualitative methods have been applied only rarely in environmental evidence synthesis. An example of the use of qualitative synthesis in a CEE review can be found in Pullin et al. (2013). Further information on these methods can be found in Gough et al. (2012) and Noyes et al. (2011).
Narrative synthesis in Systematic Maps is the tabulation and/or visualisation (often with descriptive statistics) of the characteristics of individual primary studies with supporting text to explain the context. Some form of narrative synthesis should be provided in any Systematic Map, simply to present the context and overview of the distribution and abundance of evidence.
Narrative synthesis requires the construction of tables, developed from data coding forms (see Section 8), that provide details of the study or population characteristics and relevant outcome types, all of which should have been defined a priori in the Protocol. Narrative synthesis should not include a statement of the measured effect reported in each study, but the Review Team’s assessment of study validity (including internal and external validity) may be included. The level of detail employed and emphasis placed on narrative synthesis will depend on the form of mapping and data visualisation employed.
The process of mapping and presentation of data can take many forms, and this guidance does not seek to be overly prescriptive in what is a fast-moving field (see James et al. (2016) for a detailed discussion of methodologies for the production of Systematic Maps). Presentation of maps can range from a simple spreadsheet format to innovative forms of data visualisation that make the evidence base easier to interrogate and from which to extract information of interest to the user. Good examples of data visualisation are McKinnon et al. (2016) and Haddaway et al. (2014).
Recording of key characteristics of each study included in a narrative synthesis is vital if the Systematic Map is to be useful in summarising the evidence base. Key characteristics stated in the Protocol must be fully presented in at least tabular form. Below is a minimum list of characteristics that will normally be enhanced through data coding of other variables of interest.