Errors – the differences between predicted values and actual values, also called residuals – are a key part of statistical models. They are the raw material for metrics of predictive model performance (accuracy, precision, recall, lift, etc.) and the basis for diagnostics on descriptive models. A related concept is loss, a function of the errors that the model is fit to minimize. A common loss function is the sum of squared errors, which is minimized in least-squares regression. In classification models, a loss function might weight misclassifications of one type (e.g. “a valid transaction is classified as fraud”) differently from another (“a fraudulent transaction is classified as valid”).
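The two kinds of loss described above can be sketched in a few lines. This is a minimal illustration, not code from any particular library: the data values and the cost weights (5.0 for a missed fraud versus 1.0 for a false alarm) are invented for the example.

```python
import numpy as np

# Regression: residuals are actual minus predicted; the sum of their
# squares is the quantity least-squares regression minimizes.
actual = np.array([3.0, 5.0, 7.5, 10.0])
predicted = np.array([2.8, 5.4, 7.0, 10.3])
residuals = actual - predicted
sse = np.sum(residuals ** 2)

# Classification: an asymmetric loss that penalizes a fraudulent
# transaction classified as valid more heavily than a valid
# transaction classified as fraud. The costs are illustrative.
def misclassification_loss(y_true, y_pred,
                           missed_fraud_cost=5.0, false_alarm_cost=1.0):
    """y_true, y_pred: arrays of 0 (valid) / 1 (fraud)."""
    missed_fraud = (y_true == 1) & (y_pred == 0)
    false_alarm = (y_true == 0) & (y_pred == 1)
    return (missed_fraud_cost * missed_fraud.sum()
            + false_alarm_cost * false_alarm.sum())

y_true = np.array([0, 1, 0, 1, 0])
y_pred = np.array([0, 0, 1, 1, 0])  # one missed fraud, one false alarm
loss = misclassification_loss(y_true, y_pred)
```

With equal costs this reduces to a plain misclassification count; raising `missed_fraud_cost` shifts a model fit against this loss toward flagging more borderline transactions as fraud.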