13: How do you know if an intervention is successful?
This chapter discusses how inferential statistics can help estimate the effect size of an intervention. Different techniques, such as simple differences, odds ratios, and percentage changes, are covered. The Kansas City Gun Experiment is used as an illustrative example. The chapter then moves to a discussion of statistical significance and why it is more useful to understand confidence intervals than p-values. Concrete examples are used to demonstrate confidence intervals. The chapter concludes with a discussion of practical significance.
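The three effect-size techniques mentioned above can be sketched in a few lines of Python. The before-and-after counts below are invented for illustration; they are not the actual Kansas City Gun Experiment figures.

```python
# Hypothetical before/after crime counts for a treatment area and a
# comparison (control) area -- illustrative numbers only.
treatment_before, treatment_after = 200, 150
control_before, control_after = 180, 170

# Simple difference: the change in the treatment area, in the metric
# of the study (here, a raw count of crimes).
simple_difference = treatment_after - treatment_before

# Percentage change in each area, which adjusts for differing baselines.
pct_change_treatment = 100 * (treatment_after - treatment_before) / treatment_before
pct_change_control = 100 * (control_after - control_before) / control_before

# Odds ratio: the relative change in the treatment area compared to the
# relative change in the control area. A value below 1 suggests crime
# fell more in the treatment area than in the control area.
odds_ratio = (treatment_after / treatment_before) / (control_after / control_before)

print(f"Simple difference: {simple_difference}")
print(f"Treatment change: {pct_change_treatment:.1f}%")
print(f"Control change: {pct_change_control:.1f}%")
print(f"Odds ratio: {odds_ratio:.2f}")
```

With these made-up numbers, crime fell 25% in the treatment area but only about 6% in the control area, giving an odds ratio of roughly 0.79. (As Box 13-1 in the chapter notes, with count data like this the measure is strictly an incident rate ratio rather than a true odds ratio.)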
Glossary terms in this chapter
Statistical significance: In grossly simplified terms, a statistically significant result is unlikely to have been a fluke. A measure of confidence that any difference between your treatment and control groups was a real difference, and not the consequence of a skewed sample, or simple chance.
Effect size: The effect size is a measure of how much your intervention changed things. It is a quantitative estimation of the magnitude of the experimental effect.
Odds ratio: The odds ratio compares the relative change in outcomes between the treatment and control groups. It is the ratio of the odds of the outcome in one group to the odds of the outcome in the other.
Descriptive statistic: A descriptive statistic tells you about the data, such as the average or the range of values.
Inferential statistics: Inferential statistics help generalize something meaningful about a larger population by inferring from a smaller sample. They can test whether an intervention’s effectiveness was more than just a fluke, and whether the intervention could be generalized to the wider population.
Null hypothesis: The status quo, the business-as-usual case, whatever that is. It is the default position that your intervention did not have an effect. Formally, there is no significant difference between specified populations, any observed difference being due to sampling or experimental error.
Confidence interval: The confidence interval is the range of values on the outcome that most likely contains the actual population value. Confidence intervals incorporate the margin of error for your estimated effect.
Practical significance: The practical significance of an intervention is an assessment of not only whether the effect was unlikely to have occurred by chance, but also that the magnitude of the effect justified the effort and impact of the intervention.
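As a sketch of how these glossary ideas fit together, the standard large-sample 95% confidence interval for an odds ratio from a 2×2 table is computed on the log-odds scale. The cell counts below are invented for illustration.

```python
import math

# Hypothetical 2x2 table of outcomes (counts invented for illustration):
#                outcome   no outcome
# treatment        a=20        b=80
# control          c=40        d=60
a, b, c, d = 20, 80, 40, 60

odds_ratio = (a * d) / (b * c)  # (20/80) / (40/60) = 0.375

# Standard error of the log odds ratio, then a 95% confidence interval
# back-transformed from the log scale.
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
z = 1.96  # critical value for 95% confidence
lower = math.exp(math.log(odds_ratio) - z * se_log_or)
upper = math.exp(math.log(odds_ratio) + z * se_log_or)

print(f"OR = {odds_ratio:.3f}, 95% CI [{lower:.3f}, {upper:.3f}]")

# If the whole interval sits below (or above) 1 -- the value meaning
# "no effect" -- the result is statistically significant at the
# conventional 0.05 level, without ever quoting a p-value.
```

Here the interval runs from about 0.20 to 0.71 and excludes 1, so the (invented) treatment effect would be statistically significant. Whether an odds ratio of 0.375 is also practically significant depends on the cost and effort of the intervention.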
Additional information and links
While written for Python users, a page from data scientist Angel Das offers a nice overview of descriptive statistics that are useful to crime and policing researchers.
If you want a slightly more technical description of both the odds ratio and the confidence interval, there is a short article from the Journal of the Canadian Academy of Child and Adolescent Psychiatry, of all places, that explains both clearly.
The ABC spreadsheet, available from the Reducing Crime website, calculates the odds ratio, as well as confidence intervals and many other calculations for crime scientists.
In Box 13‑1 (A quirk regarding the odds ratio), David Wilson's article, The relative incident rate ratio effect size for count-based impact evaluations: When an odds ratio is not an odds ratio, is abstracted (and available to university subscribers).
If you are interested in knowing more about Box 13‑2 (Why are you not reading about p-values?), an interesting article from Nature explains some of the issues around the use, and misuse, of the p-value.
Figure 13-4 shows a graphic that is replicated in detail in this Excel spreadsheet.
In video 13a (Odds ratio), available to instructors in support of the book, the example used relates to Ratcliffe, JH & Breen, C (2011) Crime diffusion and displacement: Measuring the side effects of police operations, Professional Geographer, 63(2): 230-243. The full article is available here.
Related Reducing Crime podcast episode
It was difficult to pick a podcast episode for this chapter, because Reducing Crime does not have an episode that really covers the statistics side of evidence-based policing. Can you imagine how exciting that would be as a listener? So instead of a specific tie-in to the book chapter, this seems a suitable place to feature my interview with the guru of evidence-based policing, Larry Sherman.
Professor Lawrence Sherman is Director of the Cambridge Centre for Evidence-Based Policing at the University of Cambridge and Director of the Jerry Lee Centre of Experimental Criminology. We discuss the police constable apprentice program, the role of socializing in the pub as an executive learning tool, the crime harm index and victimization, and the role of algorithms in improving the criminal justice system.