Latent Profile Analysis (LPA)

Latent Profile Analysis (LPA): Latent profile analysis is concerned with deriving information about categorical latent variables from the observed values of continuous manifest variables. In other words, LPA deals with fitting latent profile models (a special kind of latent variable model) to the measured data. To understand...

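
To make the idea concrete, here is a minimal sketch (not from this glossary) of fitting a one-dimensional, two-profile latent profile model, i.e. a two-component Gaussian mixture, by EM; all data below are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Invented scores drawn from two latent profiles with means 0 and 4
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(4, 1, 200)])

# EM for a two-component Gaussian mixture (the model behind 1-D LPA)
w, mu, sd = np.array([0.5, 0.5]), np.array([1.0, 3.0]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: posterior probability that each case belongs to each profile
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixing weights, profile means, and spreads
    w = resp.mean(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)
    sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / resp.sum(axis=0))

print(np.round(np.sort(mu), 1))  # estimated profile means, near 0 and 4
```

Each case's posterior profile memberships end up in `resp`; LPA software additionally compares solutions with different numbers of profiles, typically via BIC.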

Latent Trait Analysis (LTA)

Latent Trait Analysis (LTA): Latent trait analysis is concerned with deriving information about continuous latent variables from the observed values of categorical manifest variables. In other words, LTA deals with fitting latent trait models (a special kind of latent variable model) to the measured data. To understand...


Multidimensional Scaling

Multidimensional Scaling: Multidimensional scaling (MDS) is an approach to multivariate analysis aimed at producing a spatial or geometric representation of complex data. MDS explains the observed distance or dissimilarity matrix for a set of N objects in terms of a much smaller number (m << N) of underlying dimensions....

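
As a rough numerical sketch of this idea, classical (Torgerson) MDS recovers coordinates from a distance matrix by double-centering and eigendecomposition; the four "objects" below are invented points that truly lie on a line, so one dimension suffices:

```python
import numpy as np

# Pairwise distances between four invented points on a line
pts = np.array([0.0, 1.0, 3.0, 6.0])
D = np.abs(pts[:, None] - pts[None, :])   # N x N distance matrix

# Classical (Torgerson) MDS: double-center the squared distances ...
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
# ... then use the top-m eigenvectors as coordinates (here m = 1)
vals, vecs = np.linalg.eigh(B)            # eigenvalues in ascending order
coords = vecs[:, -1] * np.sqrt(vals[-1])

# The recovered 1-D coordinates reproduce the original spacings
print(np.round(np.abs(coords - coords[0]), 6))
```

For data that are not exactly low-dimensional, one keeps the m largest eigenvalues and judges fit by how much "strain" the remaining ones represent.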

Multivariate analysis of covariance (MANCOVA)

Multivariate analysis of covariance (MANCOVA): Multivariate analysis of covariance (MANCOVA) is similar to multivariate analysis of variance (MANOVA), but allows you to control for the effects of supplementary continuous independent variables (covariates). If there are covariates, MANCOVA should be used instead of MANOVA. Covariates are variables...


Multivariate analysis of variance (MANOVA)

Multivariate analysis of variance (MANOVA): MANOVA is a technique that determines the effects of independent categorical variables on multiple continuous dependent variables. It is usually used to compare several groups with respect to multiple continuous variables. The main distinction between MANOVA and ANOVA is that several dependent variables are considered...

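
A minimal sketch of the computation behind MANOVA, on invented data: the within-groups and between-groups sums-of-squares-and-cross-products (SSCP) matrices are combined into Wilks' lambda, the most common MANOVA test statistic (the conversion of lambda to an F statistic and p-value is omitted here):

```python
import numpy as np

rng = np.random.default_rng(1)
# Three groups measured on two dependent variables; the third group's means are shifted
groups = [rng.normal(0, 1, (30, 2)),
          rng.normal(0, 1, (30, 2)),
          rng.normal([1.5, 1.0], 1, (30, 2))]

X = np.vstack(groups)
grand = X.mean(axis=0)

# Within-groups and between-groups SSCP matrices
W = sum((g - g.mean(axis=0)).T @ (g - g.mean(axis=0)) for g in groups)
B = sum(len(g) * np.outer(g.mean(axis=0) - grand, g.mean(axis=0) - grand)
        for g in groups)

# Wilks' lambda: values well below 1 indicate the groups differ on the DVs jointly
wilks = np.linalg.det(W) / np.linalg.det(W + B)
print(round(wilks, 3))
```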

Multiple Correspondence Analysis (MCA)

Multiple Correspondence Analysis (MCA): Multiple correspondence analysis (MCA) is an extension of correspondence analysis (CA) to the case of more than two variables. The initial data for MCA are three-way or m-way contingency tables. In the case of three variables, a common approach to MCA is to combine the two least...


Multiple discriminant analysis (MDA)

Multiple discriminant analysis (MDA): Multiple discriminant analysis (MDA) is an extension of discriminant analysis; it shares ideas and techniques with multivariate analysis of variance (MANOVA). The goal of MDA is to classify cases into three or more categories using continuous or dummy categorical variables as predictors. Synonyms for...


Partial correlation analysis

Partial correlation analysis: Partial correlation analysis is aimed at finding the correlation between two variables after removing the effects of other variables. This type of analysis helps spot spurious correlations (i.e., correlations explained by the effect of other variables) and reveal hidden correlations, i.e., correlations masked by...

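
A short sketch with simulated data: the partial correlation of x and y controlling for z can be computed as the ordinary correlation of the residuals from regressing each on z. Here z drives both x and y, so the raw correlation is spurious and the partial correlation is near zero:

```python
import numpy as np

rng = np.random.default_rng(2)
# z drives both x and y, creating a spurious x-y correlation
z = rng.normal(size=500)
x = z + rng.normal(scale=0.5, size=500)
y = z + rng.normal(scale=0.5, size=500)

def residuals(a, b):
    """Residuals of a after regressing it on b (with intercept)."""
    Xmat = np.column_stack([np.ones_like(b), b])
    beta, *_ = np.linalg.lstsq(Xmat, a, rcond=None)
    return a - Xmat @ beta

raw = np.corrcoef(x, y)[0, 1]                                  # inflated by z
partial = np.corrcoef(residuals(x, z), residuals(y, z))[0, 1]  # z removed
print(round(raw, 2), round(partial, 2))  # raw is large; partial is near 0
```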

Path Analysis

Path Analysis: Path analysis is a method for causal modeling. Consider the simple case of two independent variables, x1 and x2, and one dependent variable y. Path analysis splits the contribution of x1 and x2 to the variance of y into four components (paths): two direct paths...

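
The four-component split can be checked numerically. In this sketch with simulated standardized variables, the two direct contributions p1^2 and p2^2 plus the two paths through the x1-x2 correlation (2 * p1 * p2 * r12) add up exactly to the regression R^2:

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulated data: x1 and x2 are correlated, and both affect y
x1 = rng.normal(size=2000)
x2 = 0.5 * x1 + np.sqrt(0.75) * rng.normal(size=2000)
y = 0.6 * x1 + 0.3 * x2 + rng.normal(scale=0.5, size=2000)

# Standardize, so the regression coefficients are the path coefficients
x1, x2, y = [(v - v.mean()) / v.std() for v in (x1, x2, y)]
X = np.column_stack([x1, x2])
p1, p2 = np.linalg.lstsq(X, y, rcond=None)[0]
r12 = np.corrcoef(x1, x2)[0, 1]

# Four components: two direct paths plus two paths via the x1-x2 correlation
direct = p1 ** 2 + p2 ** 2
via_corr = 2 * p1 * p2 * r12
resid = y - X @ np.array([p1, p2])
r2 = 1.0 - resid.var()
print(round(direct + via_corr, 4), round(r2, 4))  # the two totals agree
```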

Principal Components Analysis

Principal components analysis: The purpose of principal component analysis is to derive a small number of linear combinations (principal components) of a set of variables that retain as much of the information in the original variables as possible. This technique is often used when there are large numbers of variables,...

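
As a minimal sketch on invented data, the principal components can be obtained from the singular value decomposition of the centered data matrix; here three correlated variables are reduced to essentially one component:

```python
import numpy as np

rng = np.random.default_rng(4)
# Three invented variables that mostly co-vary along a single direction
base = rng.normal(size=(100, 1))
X = base @ np.array([[1.0, 0.8, 0.6]]) + 0.1 * rng.normal(size=(100, 3))

# PCA via the SVD of the centered data matrix
Xc = X - X.mean(axis=0)
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / (s ** 2).sum()  # share of variance per component
scores = Xc @ Vt.T                   # principal-component scores

print(np.round(explained, 3))  # the first component dominates
```

The rows of `Vt` are the loadings; keeping only the first few rows gives the small number of linear combinations the entry describes.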

Reciprocal Averaging

Reciprocal Averaging: Reciprocal averaging is a widely used algorithm for correspondence analysis; correspondence analysis itself is sometimes also called reciprocal averaging. The initial data set is a two-way contingency table representing the frequency of particular combinations of values of two categorical variables. The algorithm is iterative...

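
A minimal sketch of the iteration on an invented two-way contingency table; this simplified version rescales with an unweighted standardization rather than the mass-weighted one used in full correspondence analysis:

```python
import numpy as np

# Invented contingency table: rows and columns share an obvious gradient
N = np.array([[10.0, 2.0, 0.0],
              [ 4.0, 8.0, 1.0],
              [ 0.0, 3.0, 9.0]])

col = np.array([0.0, 0.5, 1.0])            # arbitrary starting column scores
for _ in range(200):
    row = N @ col / N.sum(axis=1)          # row scores = weighted averages of column scores
    col = N.T @ row / N.sum(axis=0)        # column scores = weighted averages of row scores
    col = (col - col.mean()) / col.std()   # rescale so the scores do not collapse

print(np.round(col, 2))  # converged scores order the columns along the gradient
```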

Machine Learning

Machine Learning: Analytics in which computers "learn" from data to produce models or rules that apply to those data and to other similar data. Predictive modeling techniques such as neural nets, classification and regression trees (decision trees), naive Bayes, k-nearest neighbor, and support vector machines are generally included. One characteristic...

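
As a tiny sketch of one of the techniques listed, here is k-nearest-neighbor classification on invented two-class data: a new case is assigned the majority label among its k closest training cases.

```python
import numpy as np

# Invented training data: two classes separated along both features
X_train = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
                    [1.0, 1.1], [1.2, 0.9], [0.9, 1.0]])
y_train = np.array([0, 0, 0, 1, 1, 1])

def knn_predict(x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dists)[:k]]
    return np.bincount(nearest).argmax()

print(knn_predict(np.array([0.1, 0.1])), knn_predict(np.array([1.0, 1.0])))
# → 0 1
```

The "model" here is simply the stored training data, which illustrates the entry's point: the rules are produced from the data rather than specified in advance.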

Multiplicity Issues

Multiplicity Issues: Multiplicity issues arise in a number of contexts, but they generally boil down to the same thing: repeated looks at a data set in different ways, until something "statistically significant" emerges. See multiple comparisons for how to handle multiple pairwise testing in conjunction with ANOVA. In observational studies,...

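
A quick simulation illustrates the problem: running 20 independent tests at the 0.05 level when no effect exists at all still produces at least one "significant" result in roughly 1 - 0.95**20, or about 64%, of studies.

```python
import numpy as np

rng = np.random.default_rng(5)
# 5000 simulated studies, each running 20 independent tests with NO real effect
n_studies, n_tests = 5000, 20
p = rng.uniform(size=(n_studies, n_tests))  # null p-values are uniform on [0, 1]

# Fraction of studies reporting at least one "significant" result at 0.05
any_sig = (p < 0.05).any(axis=1).mean()
print(round(any_sig, 2))  # close to 1 - 0.95**20, i.e. about 0.64
```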

False Discovery Rate

False Discovery Rate: A "discovery" is a hypothesis test that yields a statistically significant result. The false discovery rate is the proportion of discoveries for which the null hypothesis is actually true (Type-I errors). The true false discovery rate is not known, since the true state of nature is not known...

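
The Benjamini-Hochberg procedure is the standard way to control the false discovery rate; here is a minimal sketch with invented p-values. It finds the largest rank k such that the k-th smallest p-value is at most (k/m)q, and declares the k smallest p-values discoveries:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of discoveries, controlling the FDR at level q."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    # Largest rank k with p_(k) <= (k/m) * q; reject all hypotheses up to rank k
    thresh = (np.arange(1, m + 1) / m) * q
    passed = np.nonzero(p[order] <= thresh)[0]
    mask = np.zeros(m, dtype=bool)
    if passed.size:
        mask[order[: passed[-1] + 1]] = True
    return mask

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(pvals).sum())  # → 2 discoveries at FDR level 0.05
```

Only the two smallest p-values survive here, even though five are below 0.05, because the procedure accounts for the ten looks at the data.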