
Maximum Likelihood Estimator

Maximum Likelihood Estimator: The method of maximum likelihood is the most popular method for deriving estimators - the value of the population parameter T that maximizes the likelihood function is used as the estimate of that parameter. The general idea behind maximum likelihood estimation is to find the population that is...
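As a toy illustration (the data and all names below are ours, not the glossary's), the likelihood of a Bernoulli sample can be maximized numerically over a grid of candidate parameter values and compared with the known analytic answer, the sample proportion:

```python
import math

# Hypothetical example: maximum likelihood estimate of a Bernoulli
# success probability p from observed 0/1 data.
data = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]  # 7 successes in 10 trials

def log_likelihood(p, xs):
    """Log-likelihood of Bernoulli(p) for a sample of 0/1 outcomes."""
    return sum(math.log(p) if x else math.log(1 - p) for x in xs)

# Search a fine grid of candidate p values (excluding 0 and 1,
# where the log-likelihood is undefined) for the maximizer.
grid = [i / 1000 for i in range(1, 1000)]
p_hat = max(grid, key=lambda p: log_likelihood(p, data))

# For Bernoulli data the analytic MLE is the sample proportion, 7/10.
print(round(p_hat, 2))  # 0.7
```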


Multiple Least Squares Regression

Multiple Least Squares Regression: Multiple least squares regression is a special (and the most common) type of multiple regression. It relies on the least squares method to fit the regression model to the data. See also: ordinary least squares regression.


Multiple Regression

Multiple Regression: Multiple (linear) regression is a regression technique aimed at finding a linear relationship between the dependent variable and multiple independent variables. (See regression analysis.) The multiple regression model is as follows:   Yi = B0 + B1 X1i + B2 X2i + … + Bm Xmi + Ei,     i = 1, …, N,...
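A minimal sketch of fitting this model (data and code are our own illustration): with two predictors, the least-squares coefficients B0, B1, B2 solve the normal equations (X'X)B = X'Y, which a plain Gaussian elimination can handle:

```python
# Noise-free toy data generated from B0=1, B1=2, B2=3, so the fit
# should recover those coefficients exactly (up to rounding).
x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [2.0, 1.0, 4.0, 3.0, 5.0]
y = [1 + 2*a + 3*b for a, b in zip(x1, x2)]

X = [[1.0, a, b] for a, b in zip(x1, x2)]  # design matrix with intercept column

# Form the normal-equation system X'X and X'Y.
k = 3
XtX = [[sum(X[i][r] * X[i][c] for i in range(len(X))) for c in range(k)]
       for r in range(k)]
XtY = [sum(X[i][r] * y[i] for i in range(len(X))) for r in range(k)]

# Solve the 3x3 system by Gaussian elimination with partial pivoting,
# then back-substitution.
A = [row[:] + [rhs] for row, rhs in zip(XtX, XtY)]
for col in range(k):
    pivot = max(range(col, k), key=lambda r: abs(A[r][col]))
    A[col], A[pivot] = A[pivot], A[col]
    for r in range(col + 1, k):
        f = A[r][col] / A[col][col]
        A[r] = [a - f * b for a, b in zip(A[r], A[col])]
B = [0.0] * k
for r in reversed(range(k)):
    B[r] = (A[r][k] - sum(A[r][c] * B[c] for c in range(r + 1, k))) / A[r][r]

print([round(b, 6) for b in B])  # ≈ [1.0, 2.0, 3.0]
```

In practice one would use a library least-squares routine; the hand-rolled solver here just keeps the sketch self-contained.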


Non-parametric Regression

Non-parametric Regression: Non-parametric regression methods are aimed at describing a relationship between the dependent and independent variables without specifying the form of the relationship between them a priori. See also: Regression analysis.


Ordinary Least Squares Regression

Ordinary Least Squares Regression: Ordinary least squares regression is a special (and the most common) kind of ordinary linear regression. It is based on the least squares method of finding regression parameters. Technically, the aim of ordinary least squares regression is to find those values â and b̂...


Orthogonal Least Squares

Orthogonal Least Squares: In ordinary least squares, we try to minimize the sum of the vertical squared distances between the observed points and the fitted line. In orthogonal least squares, we try to fit a line which minimizes the sum of the squared distances between the observed points and the...
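The two criteria can be compared numerically (the line and points below are our own example, not the glossary's). For a point (x, y) and the line y = a*x + b, the vertical distance is y - (a*x + b), while the perpendicular (orthogonal) distance is |a*x - y + b| / sqrt(a**2 + 1):

```python
# Compare the two loss functions for the same fitted line y = 2x + 1.
a, b = 2.0, 1.0
points = [(0.0, 2.0), (1.0, 2.0), (2.0, 6.0)]  # each point lies 1 unit off vertically

# Ordinary least squares: sum of squared vertical distances.
vertical_ss = sum((y - (a * x + b)) ** 2 for x, y in points)

# Orthogonal least squares: sum of squared perpendicular distances,
# which shrinks each vertical distance by the factor 1 / (a**2 + 1).
orthogonal_ss = sum((a * x - y + b) ** 2 / (a ** 2 + 1) for x, y in points)

print(vertical_ss, orthogonal_ss)  # 3.0 0.6
```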


Precision

Precision: Precision is the degree of accuracy with which a parameter is estimated by an estimator. Precision is usually measured by the standard deviation of the estimator, which is known as the standard error. For example, the sample mean is used to estimate the population mean and the precision of...
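For the sample mean, the standard error is s / sqrt(n), where s is the sample standard deviation. A quick sketch with made-up numbers:

```python
import math
import statistics

# Illustrative sample (our own values). The standard error of the mean
# is the sample standard deviation divided by sqrt(n).
sample = [4.0, 8.0, 6.0, 5.0, 7.0]
n = len(sample)
s = statistics.stdev(sample)          # sample standard deviation (n - 1 divisor)
standard_error = s / math.sqrt(n)

print(round(standard_error, 4))  # 0.7071
```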


Regression Analysis

Regression Analysis: Regression analysis provides a "best-fit" mathematical equation for the relationship between the dependent variable (response) and independent variable(s) (covariates). There are two major classes of regression - parametric and non-parametric. Parametric regression requires choice of the regression equation with one or more unknown parameters. Linear...


Residuals

Residuals: Residuals are differences between the observed values and the values predicted by some model. Analysis of residuals allows you to estimate the adequacy of a model for particular data; it is widely used in regression analysis.
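In code, computing residuals is a one-liner (the observed and predicted values below are our own toy numbers):

```python
# Residuals are observed minus predicted, element by element.
observed = [2.0, 4.1, 5.9, 8.2]
predicted = [2.0, 4.0, 6.0, 8.0]   # from some fitted model (assumed here)

residuals = [round(o - p, 1) for o, p in zip(observed, predicted)]
print(residuals)  # [0.0, 0.1, -0.1, 0.2]
```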


Resistance

Resistance: Resistance, used with respect to sample estimators, refers to the sensitivity of the estimator to extreme observations. Estimators that do not change much with the addition or deletion of extreme observations are said to be resistant. The median is a resistant estimator of central tendency while the...
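A quick demonstration with made-up data: adding a single extreme observation drags the mean far away while leaving the median essentially unchanged.

```python
import statistics

# The median resists an extreme observation; the mean does not.
values = [10, 12, 11, 13, 12]
with_outlier = values + [1000]

print(statistics.mean(values), statistics.median(values))              # 11.6 12
print(statistics.mean(with_outlier), statistics.median(with_outlier))  # ~176.3 12.0
```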


Backward Elimination

Backward Elimination: Backward elimination is one of several computer-based iterative variable-selection procedures. It begins with a model containing all the independent variables of interest. Then, at each step the variable with the smallest F-statistic is deleted (if that F is not higher than the chosen cutoff level). A related procedure is...
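The loop structure can be sketched as follows. This is our own hypothetical skeleton: the `f_statistic` callback and the toy F values and cutoff are stand-ins, not a real fitting procedure.

```python
# Sketch of backward elimination. f_statistic(model, var) is assumed to
# return the partial F-statistic for var given the other variables in model.
def backward_eliminate(variables, f_statistic, cutoff=4.0):
    model = list(variables)
    while model:
        fs = {v: f_statistic(model, v) for v in model}
        weakest = min(fs, key=fs.get)       # variable with the smallest F
        if fs[weakest] >= cutoff:
            break                           # all remaining F values pass the cutoff
        model.remove(weakest)
    return model

# Toy, fixed F values purely for illustration (a real implementation
# would refit the model and recompute F at each step).
toy_f = {"x1": 12.0, "x2": 0.5, "x3": 6.3}
result = backward_eliminate(["x1", "x2", "x3"], lambda m, v: toy_f[v])
print(result)  # ['x1', 'x3'] - x2 is dropped because F = 0.5 < 4.0
```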


Simple Linear Regression

Simple Linear Regression: Simple linear regression is aimed at finding the "best-fit" values of two parameters, A and B, in the following regression equation:   Yi = A Xi + B + Ei,     i = 1, …, N, where Yi, Xi, and Ei are the values of the dependent variable, of the...
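The least-squares solution has a closed form: A = cov(x, y) / var(x) and B = mean(y) - A * mean(x). A short sketch with our own toy data:

```python
import statistics

# Data lying roughly on y = 2x + 1 (values are our own illustration).
x = [1.0, 2.0, 3.0, 4.0]
y = [3.1, 4.9, 7.0, 9.0]

xbar, ybar = statistics.mean(x), statistics.mean(y)
# Slope: sum of cross-deviations over sum of squared x-deviations.
A = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
     / sum((xi - xbar) ** 2 for xi in x))
# Intercept: the fitted line passes through the point of means.
B = ybar - A * xbar

print(round(A, 3), round(B, 3))  # 1.98 1.05
```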


Uplift or Persuasion Modeling

Uplift or Persuasion Modeling: A combination of treatment comparisons (e.g. send a sales solicitation, or send nothing) and predictive modeling to determine which cases or subjects respond (e.g. purchase or not) to which treatments. Here are the steps, in conceptual terms, for a typical uplift model: 1. Conduct A-B test,...
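The treatment-versus-control comparison at the heart of uplift modeling can be sketched per segment (toy records and segment names below are ours, not the glossary's): uplift for a segment is the response rate under treatment minus the response rate under control.

```python
# Each record: (segment, treated?, responded?) from a hypothetical A-B test.
records = [
    ("young", True, 1), ("young", True, 1), ("young", False, 0), ("young", False, 1),
    ("old",   True, 0), ("old",   True, 1), ("old",   False, 1), ("old",   False, 1),
]

def rate(segment, treated):
    """Response rate for one segment under treatment or control."""
    hits = [r for s, t, r in records if s == segment and t == treated]
    return sum(hits) / len(hits)

# Uplift = treated response rate minus control response rate, per segment.
uplift = {seg: rate(seg, True) - rate(seg, False) for seg in {"young", "old"}}
print(uplift)  # young: +0.5 (worth treating), old: -0.5 (treatment backfires)
```

A real uplift model would then predict this difference for new cases rather than tabulate it per segment.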


Step-wise Regression

Step-wise Regression: Step-wise regression is one of several computer-based iterative variable-selection procedures. Variables are added one-by-one based on their contribution to R-squared, but first, at each step we determine whether any of the variables (already included in the model) can be removed. If none of the variables can be removed,...


Sufficient Statistic

Sufficient Statistic: Suppose X is a random vector with probability distribution (or density) P(X | V), where V is a vector of parameters, and Xo is a realization of X. A statistic T(X) is called a sufficient statistic if the conditional probability (density) P(X | T(Xo); V) does not depend...


Variable-Selection Procedures

Variable-Selection Procedures: In regression analysis, variable-selection procedures are aimed at selecting a reduced set of the independent variables - the ones providing the best fit to the model. The criterion for selecting is usually the following F-statistic:   F(x1, …, xp; xp+1) = (n − p − 1) · [SSE(x1, …, xp) − SSE(x1, …, xp, xp+1)] / SSE(x1, …, xp, xp+1), where n is...
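A numeric illustration of this F-statistic (the sample size and SSE values below are assumed for the example, not taken from real data):

```python
# Partial F for adding x_{p+1} to a model already containing x_1, ..., x_p.
n, p = 50, 3
sse_reduced = 120.0   # SSE(x1, ..., xp): residual sum of squares without x_{p+1}
sse_full = 100.0      # SSE(x1, ..., xp, xp+1): residual sum of squares with it

F = (n - p - 1) * (sse_reduced - sse_full) / sse_full
print(round(F, 2))  # 9.2
```

A large F indicates that adding the candidate variable reduces the residual sum of squares by more than chance would suggest.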


Alpha Spending Function

Alpha Spending Function: In the interim monitoring of clinical trials, multiple looks are taken at the accruing results. In such circumstances, akin to multiple testing, the alpha-value at each look must be adjusted in order to preserve the overall Type-1 Error. Alpha spending functions (the Pocock family is one such...


Attribute

Attribute: In data analysis or data mining, an attribute is a characteristic or feature that is measured for each observation (record) and can vary from one observation to another. It might be measured in continuous values (e.g. time spent on a web site), or in categorical values (e.g. red, yellow, green)....
