
Alaskan Generosity

People in Alaska are extraordinarily generous. At least, that's what a predictive model showed when applied to a charitable organization's donor list. A closer examination revealed a flaw: while the original data covered all 50 states, the model's training data for Alaska included donors but excluded non-donors. The reason?

The data was 99% non-donors, and predictive models like to work with more balanced data. (This example is from the excellent blogs at Elder Research.) So the analyst used a standard technique: keeping all the donors, but downsampling the prevalent non-donor class so its size was more in line with the number of donors. However, rather than downsampling randomly, the analyst ordered the list by zipcode and moved down it, selecting every nth case. The selection quota ended up sampling from all but one state: it was filled before the selection reached Alaska, whose zipcodes all begin with 99. As a result, there were no non-donors from Alaska in the training data, only donors. The CART (decision tree) algorithm thus found an excellent rule for dividing donors from non-donors: if a person is from Alaska, classify as donor. In case you were wondering, the Chronicle of Philanthropy ranked Alaska #28 in charitable giving in 2012.
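Here is a minimal sketch of that failure mode. The column names, proportions, and every-10th-case step are illustrative assumptions, not the charity's actual data or the analyst's exact procedure; the point is simply how a zipcode-ordered quota misses the 99xxx codes while random downsampling does not.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Illustrative donor file: ~1% donors, zipcodes spanning 01000-99999.
# Alaska's 99xxx codes sort to the very end of the list.
n = 100_000
df = pd.DataFrame({
    "zipcode": rng.integers(1_000, 100_000, size=n),
    "donor": rng.random(n) < 0.01,
})
donors = df[df["donor"]]
non_donors = df[~df["donor"]]

# Flawed approach: sort by zipcode, walk down the list taking every
# 10th case, and stop once the quota (one non-donor per donor) is
# filled. The quota fills long before the 99xxx rows at the bottom
# of the sorted list are reached.
flawed = (non_donors.sort_values("zipcode")
          .iloc[::10]
          .head(len(donors)))

# Safer approach: downsample the majority class uniformly at random.
balanced = non_donors.sample(n=len(donors), random_state=0)

print("Alaska non-donors, flawed sample:", (flawed["zipcode"] >= 99_000).sum())
print("Alaska non-donors, random sample:", (balanced["zipcode"] >= 99_000).sum())
```

Run this and the flawed sample contains zero Alaskan non-donors, while the random sample contains roughly its fair share. Any model trained on the flawed sample sees Alaska as a state of pure donors.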

This is an obvious example of bias in the selection of a sample, easily corrected. But selection bias makes its way into analysis in far more subtle, unconscious, and sometimes far-reaching ways. It is one of the factors that lie behind the reproducibility crisis in scientific research discussed by John Ioannidis in his aptly named paper Why Most Published Research Findings Are False.

The problem arises when the classic order of knowledge discovery is reversed. In a classical experiment, we have:

form hypothesis > collect data > confirm or reject hypothesis

But setting up an experiment and then collecting data is difficult and, in some cases, impossible. An alternative is to consult pre-existing data, which often leads to a reversal of the order:

explore data > find something interesting > form and confirm hypothesis

With some reflection on the prevalence of random variability and noise, you can see how easy it is to fool yourself with this approach: if you look often and hard enough, you will find all manner of seemingly interesting phenomena lurking in the chance patterns of data. Or, in the words of the economist Ronald Coase:

“If you torture the data long enough, it will confess.”
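A quick simulation makes the torture concrete. The sample sizes, number of predictors, and the p < 0.05 threshold below are arbitrary choices for illustration: the data is pure noise, yet exploring it hard enough yields "findings."

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Pure noise: 200 observations of an outcome and 100 predictors,
# none of which has any real relationship to the outcome.
y = rng.normal(size=200)
X = rng.normal(size=(200, 100))

# "Explore the data": test every predictor against the outcome.
p_values = [stats.pearsonr(X[:, j], y)[1] for j in range(100)]
hits = sum(p < 0.05 for p in p_values)

print(f"'Significant' correlations found in pure noise: {hits}")
# Expect roughly 5 of the 100 tests to clear p < 0.05 by chance alone.
```

Each of those chance hits, written up after the fact as if it had been the hypothesis all along, is exactly the reversed order of discovery described above.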