Supervised machine learning
In the “cluster of six”, we used unsupervised machine learning to reveal hidden structure in unlabelled data and analyse the voting patterns of Labour Members of Parliament. In this blog post, we’ll use supervised machine learning to see how well we can predict crime in London. Perhaps not specific crimes, but we can use recorded crime summary data at London borough level (non-personal aggregated data licensed under the Open Government Licence) to predict crime counts.
Along the way, we’ll see the pay-off from an exploration of multiple models.
Why might one want to predict crime counts? Perhaps we have the responsibility to deploy, or plan a budget for, police resources. Maybe we are considering where we might invest in additional CCTV infrastructure. Robust models can support decision-making by providing predictions grounded in facts, and are especially useful where complexity in the data is otherwise harder to unpick.
No “one size fits all”
There is a range of modelling techniques available. At one end of the spectrum sit the more intuitive and interpretable models: given conditions A and B, we can anticipate outcome X. At the other end, we have more powerful and complex models, where one accepts the hidden nature of their inner machinery in exchange for potentially greater predictive power.
There is no “one size fits all”, and the predictive power of each model will vary depending on the data being modelled. That alone is a good reason to consider multiple models. And I don’t mind admitting that I encountered another very good reason whilst preparing my five-model analysis for this post.
I almost abandoned the model that ultimately delivered the lowest prediction error. It was because the other models were generating stronger predictions that I questioned my execution of the fifth. The fuller, though by no means exhaustive, methodology, including code, is available here.
Preliminary exploration
Building familiarity with the data is an important first step. We’ll begin with 32 mini-plots, one for each London borough. Within each are the crime trends across nine major crime categories. What does this tell us?
Borough is likely to be a key predictor given the considerable variation in crime counts associated with this categorical variable. Contrast, for example, the vertical scaling for “Westminster” with that for “Sutton” in the bottom-right corner.
Major crime category will also likely be a key predictor, with “theft & handling”, and “violence against the person”, associated with significantly more crime across all London boroughs.
There is also a possible interplay between borough and crime category which we may need to account for in models sensitive to interaction. This is evident where more affluent boroughs, or those attracting more visitors, such as “Kensington & Chelsea”, and “Westminster”, have significantly higher counts for “theft & handling”. Contrast these boroughs with, for example, “Lewisham”, where “violence against the person” plays a more significant role.
A summary of each potential predictor also exposes their possible influence, for example, the growth in crime count over time. (I may dedicate a future post to time-series forecasting.)
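A sketch of how such borough mini-plots can be produced with ggplot2’s `facet_wrap()` follows. It uses synthetic stand-in data with illustrative column names, since the post’s actual data preparation lives in the supporting code:

```r
library(ggplot2)

# Synthetic stand-in for the recorded-crime data (column names are
# illustrative, not necessarily those used in the post).
set.seed(1)
crime_df <- expand.grid(
  borough        = c("Westminster", "Sutton", "Lewisham", "Camden"),
  major_category = c("Theft and Handling", "Violence Against the Person"),
  month          = seq(as.Date("2016-01-01"), by = "month", length.out = 24)
)
crime_df$crime_count <- rpois(nrow(crime_df), lambda = 200)

# One mini-plot per borough; free y-scales make the per-borough
# variation in vertical scaling visible, as noted above.
p <- ggplot(crime_df, aes(month, crime_count, colour = major_category)) +
  geom_line() +
  facet_wrap(~ borough, scales = "free_y") +
  labs(x = NULL, y = "Crime count", colour = NULL)
```

With the real data, 32 boroughs and nine categories would replace the toy values, but the faceting pattern is the same.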
Do all four potential predictors matter?
One way to address this question is to use recursive partitioning to create a tree diagram. At the top of the tree we have 100% of the more than eleven thousand observations. The first, and most important, split is based on the major crime category: 23% of the observations are partitioned off to the right (to node 3) for “theft and handling” (abbreviated as T&H) and “violence against the person” (VATP), with the balance branching left.
Similarly, borough appears early in the recursive partitioning where node 3 splits based on this variable.
We could go to lower levels of granularity, but our purpose here is a preliminary assessment of the most important variables. This shows that month is of lesser significance. However, we’ll keep it in our initial modelling to see if it’s significant enough to influence our models’ predictive power.
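The tree above can be grown with the rpart package. The following is a minimal sketch on synthetic stand-in data (variable names are illustrative; the post’s real model is in the supporting code):

```r
library(rpart)

# Synthetic stand-in data with the four candidate predictors.
set.seed(1)
crime_df <- expand.grid(
  borough        = c("Westminster", "Sutton", "Lewisham", "Camden"),
  major_category = c("Theft and Handling", "Violence Against the Person",
                     "Burglary"),
  month          = 1:24
)
crime_df$crime_count <- rpois(
  nrow(crime_df),
  lambda = 50 + 30 * as.integer(crime_df$major_category)
)

# Regression tree via recursive partitioning; each split chooses the
# predictor that best separates the crime counts.
fit <- rpart(crime_count ~ borough + major_category + month,
             data = crime_df, method = "anova")

# rpart.plot::prp(fit) would then draw the tree diagram described above.
```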
Training and testing our models
Cross-validation is a comparatively simple and popular method for estimating prediction error. We’ll use repeated cross-validation to train the models on randomly selected folds of the data, and validate them on the remaining fold. This approach is designed to strengthen the models’ ability to perform well on as-yet-unseen observations.
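In caret, repeated cross-validation is configured once and reused for every model, which keeps the comparison like-for-like. The fold and repeat counts below are illustrative; the post’s exact settings may differ:

```r
library(caret)

# Repeated k-fold cross-validation: 10 folds, repeated 5 times
# (settings are illustrative, not necessarily the post's).
ctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 5)

# Each model is then trained with the same control object, e.g.:
# fit <- train(crime_count ~ ., data = train_df,
#              method = "rf", trControl = ctrl)
```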
There are many modelling choices we can make to enhance performance, for example: the initial selection of models; how we pre-process the data; and how we use tuning parameters to optimise each model’s fit. The choices made for this post (and I by no means explored every possibility) are discussed in the supporting documentation.
For the purposes of this article, we’ll jump to assessing the models’ predictions versus the known actuals to see how they performed.
Comparing predictive power
Optimal predictions sit on, or close to, the dashed line in the graphic below, i.e. where the prediction for each observation equals the actual. The Root Mean Squared Error (RMSE) measures the average size of the prediction errors, so should be as small as possible. And R-squared measures the correlation between prediction and actual, where 0 reflects no correlation, and 1 perfect positive correlation.
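In code, the two metrics reduce to a couple of lines. A toy example with made-up numbers (the post itself uses `modelr::rmse()` and `modelr::rsquare()` against fitted models):

```r
# RMSE and R-squared by hand for a toy set of predictions vs actuals.
actual    <- c(120, 80, 200, 45)
predicted <- c(110, 90, 190, 50)

rmse <- sqrt(mean((predicted - actual)^2))  # ~9.01; lower is better
rsq  <- cor(predicted, actual)^2            # near 1: predictions track actuals
```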
Our supervised machine learning outcomes from the CART and GLM models have weaker RMSEs, and visually exhibit some dispersion in the predictions at higher counts. Stochastic Gradient Boosting, Cubist and Random Forest have handled the higher counts better as we see from the visually tighter clustering.
It was Random Forest that produced marginally the smallest prediction error. And it was a parameter unique to the Random Forest model that almost tripped me up, as discussed in the supporting documentation.
The moral of the story reinforces the value of exploring multiple models. One can’t be certain which is best adapted to the data in hand. And model comparison also provides a very helpful check and balance from which the ultimate outcome may be all the stronger.
R toolkit
| Package | Functions |
|---|---|
| purrr | map[2]; map_dfr[1]; set_names[1] |
| furrr | future_map2_dfr[1] |
| future | plan[1] |
| doParallel | registerDoParallel[1] |
| parallel | makeCluster[1]; stopCluster[1] |
| readr | read_csv[1]; read_lines[1] |
| dplyr | mutate[6]; filter[5]; if_else[5]; tibble[4]; arrange[2]; count[2]; group_by[2]; rename[2]; select[2]; summarise[2]; as_tibble[1]; desc[1]; top_n[1] |
| tidyr | gather[2]; separate[1]; unnest[1] |
| stringr | str_c[4]; str_replace[2]; fixed[1]; str_count[1]; str_detect[1]; str_pad[1]; str_remove[1]; str_remove_all[1]; str_replace_all[1] |
| forcats | fct_inorder[1] |
| rebus | lookahead[3]; whole_word[2]; lookbehind[1]; one_or_more[1] |
| tibble | enframe[1]; rownames_to_column[1] |
| rpart | rpart[1] |
| rpart.plot | prp[1] |
| caret | train[7]; trainControl[1]; varImp[1] |
| broom | tidy[1] |
| modelr | spread_predictions[2]; gather_residuals[1]; rmse[1]; rsquare[1] |
| base | c[13]; library[13]; expand.grid[5]; factor[5]; seq[5]; function[4]; list[3]; paste0[3]; round[3]; subset[2]; sum[2]; as.data.frame[1]; as.integer[1]; conflicts[1]; cumsum[1]; max[1]; min[1]; Negate[1]; search[1] |
| ggplot2 | element_text[10]; element_rect[8]; element_blank[7]; theme[7]; ggplot[6]; ggtitle[5]; aes[4]; geom_abline[4]; labs[4]; facet_wrap[3]; geom_point[3]; scale_x_continuous[3]; geom_col[2]; geom_text[2]; guide_legend[2]; guides[2]; scale_y_continuous[2]; scale_y_log10[2]; aes_string[1]; coord_cartesian[1]; coord_flip[1]; geom_boxplot[1]; geom_hline[1]; geom_jitter[1]; geom_line[1]; geom_smooth[1]; scale_x_discrete[1] |
| ggthemes | scale_colour_economist[4]; theme_economist[4]; economist_pal[1] |
| gridExtra | grid.arrange[1] |
| kableExtra | kable[1]; kable_styling[1] |
View the code here.
That was a very enjoyable read. Also, thanks for making me discover the Economist theme, I love it!
Thanks for the comments Victor.
Some remarkable R code. I am trying to learn how you did it, but I cannot reproduce the ggplot facet_wrap plot. Using the same code, the line plots are squished and impossible to read. Any ideas?
In R Markdown this code chunk begins with

```{r Explore detail, fig.align='center', fig.height=20, fig.width=9}
```

to give the plot sufficient space. Alternatively, if you are using RStudio, you could similarly adjust the plot height and width using “Export” and “Save as image” in the Plots tab.

Thank you so much! Again, some of your code is really terrific and I am learning so much! Thank you!
Just in case someone else hits a problem when running the RF models:
I had to install the latest ranger and lattice packages. I also updated to the latest caret package on CRAN. (The GitHub version bombed on me.)
Otherwise, this error was produced:
Error: The tuning parameter grid should have columns mtry, splitrule
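For anyone hitting the same error: with recent versions of caret, the ranger-based Random Forest method expects a tuning grid containing all three columns below. A minimal sketch (the values are illustrative, not the post’s):

```r
library(caret)

# caret's "ranger" method requires mtry, splitrule and min.node.size
# in the tuning grid; supplying only mtry triggers the error above.
rf_grid <- expand.grid(
  mtry          = c(2, 3, 4),
  splitrule     = "variance",   # regression split rule
  min.node.size = 5
)

# Then pass the grid to train(), e.g.:
# fit <- train(crime_count ~ ., data = train_df, method = "ranger",
#              trControl = ctrl, tuneGrid = rf_grid)
```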