
Kolmogorov-Smirnov Two-sample Test

Kolmogorov-Smirnov Two-sample Test: The Kolmogorov-Smirnov two-sample test is a test of the null hypothesis that two independent samples have been drawn from the same population (or from populations with the same distribution). The test uses the maximal difference between the cumulative frequency distributions of the two samples as the test statistic.
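As an illustration, the test is available in SciPy as `ks_2samp`. The sketch below uses simulated samples (all numbers are made up for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 200)    # sample from N(0, 1)
b = rng.normal(0.8, 1.0, 200)    # sample from a shifted population

# The statistic D is the maximal vertical distance between the two
# empirical cumulative distribution functions.
result = stats.ks_2samp(a, b)
print(result.statistic, result.pvalue)
```

With a shift this large, the maximal CDF gap is substantial and the test rejects the null hypothesis of a common distribution.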


Kruskal-Wallis Test

Kruskal-Wallis Test: The Kruskal-Wallis test is a nonparametric test of whether three or more independent samples come from populations having the same distribution. It is a nonparametric version of ANOVA.
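A minimal sketch using SciPy's `kruskal`, with three simulated groups (made-up data, one group deliberately shifted):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
g1 = rng.normal(0.0, 1.0, 50)
g2 = rng.normal(0.0, 1.0, 50)
g3 = rng.normal(1.5, 1.0, 50)    # this group is shifted

# Kruskal-Wallis works on ranks, so no normality assumption is needed
h, p = stats.kruskal(g1, g2, g3)
print(h, p)
```

A small p-value here indicates that at least one group's distribution differs from the others.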


Level Of Significance

Level of Significance: In hypothesis testing, you seek to decide whether observed results are consistent with chance variation under the "null hypothesis," or, alternatively, whether they are so different that chance variability can be ruled out as an explanation for the observed sample. The range of variation of samples that...


Likelihood Ratio Test

Likelihood Ratio Test: The likelihood ratio test is aimed at testing a simple null hypothesis against a simple alternative hypothesis. (See Hypothesis for an explanation of "simple hypothesis".) The likelihood ratio test is based on the likelihood ratio r as the test statistic: r = P(X | H1) / P(X | H0) ...
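For a concrete toy example with two simple hypotheses about a coin's heads probability (all numbers are made up for illustration), the ratio can be computed directly:

```python
from math import comb

# Simple null H0: p = 0.5 vs. simple alternative H1: p = 0.7
# Hypothetical data: k = 14 heads in n = 20 tosses
n, k = 20, 14

def binom_likelihood(p, n, k):
    """P(X = k | heads probability p) under a binomial model."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

r = binom_likelihood(0.7, n, k) / binom_likelihood(0.5, n, k)
# r > 1 means the observed data are more likely under H1 than under H0
```

The test rejects H0 when r exceeds a critical value chosen to give the desired level of significance.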


Lilliefors Statistic

Lilliefors Statistic: The Lilliefors statistic is used in a goodness-of-fit test of whether an observed sample distribution is consistent with normality. The statistic measures the maximum distance between the observed distribution and a normal distribution with the same mean and standard deviation as the sample, and assesses whether...


Lilliefors test for normality

Lilliefors test for normality: The Lilliefors test is a special case of the Kolmogorov-Smirnov goodness-of-fit test. In the Lilliefors test, the Kolmogorov-Smirnov test is implemented using the sample mean and standard deviation as the mean and standard deviation of the theoretical (benchmark) population against which the observed sample...
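A sketch of the underlying computation, using SciPy's `kstest` against a normal distribution with the sample's own mean and standard deviation (simulated data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(10.0, 2.0, 100)   # simulated sample

# KS distance to a normal with the *sample* mean and standard deviation.
# Because the parameters are estimated from the data, the p-value from
# the ordinary KS test is not valid here; Lilliefors critical values
# (or a dedicated routine such as statsmodels' lilliefors) must be used.
d, _ = stats.kstest(x, 'norm', args=(x.mean(), x.std(ddof=1)))
print(d)
```

The statistic d is the same maximal-distance measure as in the KS test; only the reference table for judging its significance changes.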


Mantel-Cox Test

Mantel-Cox Test: The Mantel-Cox test (also known as the log-rank test) is aimed at testing the null hypothesis that survival functions do not differ across groups.
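As an illustrative sketch, the statistic can be computed by hand for uncensored toy data: at each event time, compare the deaths observed in one group with the number expected if survival were equal across groups. (All numbers are made up; real analyses, which must handle censoring, would use a survival package such as lifelines.)

```python
import math

# Toy, uncensored survival times for two groups (hypothetical numbers)
group_a = [3, 5, 7, 8, 12, 14]   # longer survival
group_b = [1, 2, 2, 4, 5, 6]

def logrank_z(t1, t2):
    """Two-group log-rank (Mantel-Cox) z-statistic, uncensored data only."""
    o1 = e1 = v = 0.0
    for t in sorted(set(t1) | set(t2)):
        n1 = sum(x >= t for x in t1)      # at risk in group 1 just before t
        n2 = sum(x >= t for x in t2)
        d1, d2 = t1.count(t), t2.count(t)
        n, d = n1 + n2, d1 + d2
        o1 += d1                          # observed deaths in group 1
        e1 += d * n1 / n                  # expected if survival were equal
        if n > 1:
            v += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    return (o1 - e1) / math.sqrt(v)       # approx. N(0, 1) under H0

z = logrank_z(group_a, group_b)
print(z)
```

Here group A dies later than expected under the null, so z is negative and large in magnitude, suggesting the survival functions differ.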


Mean Score Statistic

Mean Score Statistic: The mean score statistic is one of the statistics used in the generalized Cochran-Mantel-Haenszel tests. It is applicable when the response levels (columns) are measured on an ordinal scale. If the two variables are independent of each other in all strata, the asymptotic...


Multiple Comparison

Multiple Comparison: Multiple comparisons are used in the same context as analysis of variance (ANOVA) - to check whether there are differences in population means among more than two populations. In contrast to ANOVA, which simply tests the null hypothesis that all means are equal, multiple comparisons procedures help you...
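One common procedure is to run all pairwise tests with a Bonferroni correction, which splits the overall significance level across the comparisons. A sketch with made-up groups (other procedures, such as Tukey's HSD, are often preferred in practice):

```python
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
groups = {
    "A": rng.normal(0.0, 1.0, 40),
    "B": rng.normal(0.0, 1.0, 40),
    "C": rng.normal(1.2, 1.0, 40),   # this group's mean differs
}

pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)            # Bonferroni: split alpha across pairs

results = {}
for g1, g2 in pairs:
    _, p = stats.ttest_ind(groups[g1], groups[g2])
    results[(g1, g2)] = p
    print(g1, g2, p, "significant" if p < alpha else "not significant")
```

Unlike the overall ANOVA F-test, this tells you *which* pairs of means differ, not merely that some difference exists.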


Nonparametric Tests

Nonparametric Tests: In statistical inference procedures (hypothesis tests and confidence intervals), nonparametric procedures are those that are relatively free of assumptions about population parameters. For an example of a nonparametric test, see sign test. See also parametric tests.
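To make the sign-test example concrete: count how many paired observations moved in each direction, then compare the counts with a fair-coin binomial. A sketch with hypothetical paired scores:

```python
from scipy.stats import binomtest

# Hypothetical paired scores before and after a treatment
before = [72, 65, 80, 71, 68, 75, 70, 77, 66, 74]
after  = [75, 70, 82, 70, 74, 80, 76, 79, 70, 78]

pluses = sum(a > b for b, a in zip(before, after))   # improvements
n = sum(a != b for b, a in zip(before, after))       # ties are dropped

# Under H0 (no systematic change), pluses ~ Binomial(n, 1/2)
p = binomtest(pluses, n, 0.5).pvalue
print(pluses, n, p)
```

Note that the test uses only the signs of the differences, not their sizes, which is what frees it from assumptions about the population distribution.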


Normality Tests

Normality Tests: Normality tests are tests of whether a set of data is distributed in a way that is consistent with a normal distribution. Typically, they test the null hypothesis that the data are drawn from a normal population; that is, they are goodness-of-fit tests. Hence, while it is possible...
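As an illustration, the Shapiro-Wilk test (one of several normality tests available in SciPy) applied to simulated normal and clearly non-normal samples:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
normal_data = rng.normal(0.0, 1.0, 200)
skewed_data = rng.exponential(1.0, 200)   # clearly non-normal

# Shapiro-Wilk: H0 is that the sample comes from a normal population
_, p_normal = stats.shapiro(normal_data)
_, p_skewed = stats.shapiro(skewed_data)
print(p_normal, p_skewed)
```

The skewed sample yields a tiny p-value (normality rejected); the normal sample typically does not.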


Null Hypothesis

Null Hypothesis: In hypothesis testing, the null hypothesis is the one you are hoping can be disproven by the observed data. Typically, it asserts that chance variation is responsible for an effect seen in observed data (for example, a difference between treatment and placebo, an apparent correlation between one variable...


p-value

p-value: The p-value is the probability that the null model could, by random chance variation, produce a sample as extreme as the observed sample (as measured by some sample statistic of interest).
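This definition can be made concrete by simulating the null model directly. A sketch with a hypothetical coin-tossing example (all numbers made up):

```python
import numpy as np

# Null model: a fair coin. Observed (hypothetical): 60 heads in 100 tosses.
# The p-value asks how often chance alone would produce a result at
# least this extreme (two-sided: 60 or more heads, or 40 or fewer).
rng = np.random.default_rng(5)
sims = rng.binomial(n=100, p=0.5, size=100_000)
p_value = np.mean((sims >= 60) | (sims <= 40))
print(p_value)
```

The simulated proportion approximates the exact binomial p-value (about 0.057 here), illustrating that a p-value is simply a tail probability under the null model.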
