Parametric Tests: In statistical inference procedures (hypothesis tests and confidence intervals), parametric procedures are those that rely on assumptions about the form of the population distribution and its parameters. See also nonparametric tests.
Permutation Tests: A permutation test determines how unusual an observed outcome is by repeatedly shuffling the observed data. A typical problem involves testing the hypothesis that two or more samples come from the same population. The permutation test proceeds as follows: 1. Combine the observations from all the samples. 2. Shuffle the combined data and redistribute it into resamples of the original sizes. 3. Record the statistic of interest (for example, the difference in sample means). 4. Repeat steps 2-3 many times. 5. Find how often the resampled statistic is as extreme as the observed value; this proportion is the p-value.
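The steps above can be sketched in a few lines of Python (a minimal illustration; the function name and sample data are hypothetical, not from the source):

```python
import random

def permutation_test(sample1, sample2, n_resamples=10_000, seed=0):
    """Two-sample permutation test for a difference in means.

    Returns the two-sided p-value: the fraction of shuffles whose
    absolute mean difference is at least as large as the observed one.
    """
    rng = random.Random(seed)
    n1 = len(sample1)
    observed = abs(sum(sample1) / n1 - sum(sample2) / len(sample2))
    combined = list(sample1) + list(sample2)   # step 1: combine
    count = 0
    for _ in range(n_resamples):
        rng.shuffle(combined)                  # step 2: shuffle
        m1 = sum(combined[:n1]) / n1           # step 3: recompute statistic
        m2 = sum(combined[n1:]) / (len(combined) - n1)
        if abs(m1 - m2) >= observed:
            count += 1
    return count / n_resamples                 # step 5: proportion as extreme
```

For two clearly separated samples, such as `[10, 11, 12, 13]` versus `[1, 2, 3, 4]`, the returned p-value is small, because only the original split (and its mirror image) reproduces so large a difference.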
Pivotal Statistic: A statistic is said to be pivotal if its sampling distribution does not depend on unknown parameters. Pivotal statistics are well suited for statistical tests, because this property allows you to control the Type I error rate irrespective of any unknown parameters of the population.
Post-hoc Tests: Post-hoc tests (or post-hoc comparison tests) are used at the second stage of the analysis of variance (ANOVA) or multivariate analysis of variance (MANOVA) if the null hypothesis is rejected. The question of interest at this stage is which groups differ significantly from the others with respect to the outcome being measured.
Power of a Hypothesis Test: The power of a hypothesis test is a measure of how effective the test is at identifying (say) a difference in populations if such a difference exists. It is the probability of rejecting the null hypothesis when it is false.
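Power can be estimated by simulation: generate many samples under a specified alternative, run the test on each, and count how often the null is rejected. The sketch below does this for a two-sided one-sample z-test with known standard deviation (the function name and default parameters are illustrative assumptions):

```python
import math
import random

def simulated_power(true_mean, n=25, sigma=1.0, n_sims=5000, seed=1):
    """Estimate the power of a two-sided one-sample z-test of
    H0: mean = 0 at alpha = 0.05, by simulating samples drawn
    from Normal(true_mean, sigma) and counting rejections."""
    rng = random.Random(seed)
    z_crit = 1.96  # two-sided critical value for alpha = 0.05
    rejections = 0
    for _ in range(n_sims):
        sample = [rng.gauss(true_mean, sigma) for _ in range(n)]
        z = (sum(sample) / n) * math.sqrt(n) / sigma
        if abs(z) > z_crit:
            rejections += 1
    return rejections / n_sims
```

When the true mean equals the hypothesized value, the estimated "power" is simply the Type I error rate, about 0.05; for a true mean of 0.5 with n = 25 it is roughly 0.7.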
Randomization Test: See permutation tests.
Rejection Region: See acceptance region.
Relative Efficiency (of tests): The relative efficiency of two tests is a measure of their relative power. Suppose tests 1 and 2 are tests of the same null hypothesis at the same significance level alpha (the probability of Type I error). The relative efficiency of test 1 to test 2 is the ratio of the sample sizes each test requires to achieve the same power against the same alternative.
Sensitivity: Sensitivity (of a medical diagnostic test for a disease) is the probability that the test is positive for a person with the disease. Sensitivity alone is not sufficient to characterize a test. For example, a test reporting all subjects who take the test as "positive" (having the disease) has a sensitivity of 1.0, yet it is useless because its specificity is zero.
Sign Test: The sign test is a nonparametric test used with paired replicates to test for a difference between the 1st and the 2nd measurement in a group of "subjects". For each pair, you assign a "1" if the 1st measurement has the larger value and a "0" if the 2nd measurement is larger; under the null hypothesis of no difference, the number of "1"s follows a binomial distribution with p = 0.5.
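A minimal sketch of the sign test (the function name is an illustrative assumption; the p-value comes from the Binomial(n, 0.5) distribution, dropping tied pairs):

```python
import math

def sign_test_p_value(pairs):
    """Two-sided sign test for paired data.

    pairs: list of (first, second) measurements; ties are dropped.
    Under H0, the count of pairs where the first value is larger
    follows Binomial(n, 0.5)."""
    signs = [1 if a > b else 0 for a, b in pairs if a != b]
    n = len(signs)
    k = sum(signs)

    def binom_pmf(i):
        return math.comb(n, i) * 0.5 ** n

    # two-sided p-value: probability of outcomes at least as
    # extreme as k, in either tail
    extreme = min(k, n - k)
    p = sum(binom_pmf(i) for i in range(0, extreme + 1))
    p += sum(binom_pmf(i) for i in range(n - extreme, n + 1))
    return min(p, 1.0)
```

For example, if the 1st measurement is larger in all 8 of 8 pairs, the two-sided p-value is 2/256, about 0.008.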
Significance Testing: See hypothesis testing.
Specificity: Specificity (of a medical diagnostic test for a disease) is the probability that the test will come out negative for a person without the disease. Specificity alone is not sufficient to characterize a test. For example, a test reporting all subjects who take the test as "negative" (not having the disease) has a specificity of 1.0, yet it is useless because its sensitivity is zero.
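Both quantities can be computed from labeled results in a few lines (a sketch; the function name and data layout are illustrative assumptions):

```python
def sensitivity_specificity(results):
    """Compute (sensitivity, specificity) from test results.

    results: list of (has_disease: bool, test_positive: bool).
    Sensitivity = P(test positive | disease);
    specificity = P(test negative | no disease)."""
    tp = sum(1 for d, t in results if d and t)       # true positives
    fn = sum(1 for d, t in results if d and not t)   # false negatives
    tn = sum(1 for d, t in results if not d and not t)  # true negatives
    fp = sum(1 for d, t in results if not d and t)   # false positives
    return tp / (tp + fn), tn / (tn + fp)
```

A test that calls everyone "positive" scores a perfect sensitivity of 1.0 but a specificity of 0.0, illustrating why neither measure alone characterizes a test.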
Statistical Test: A statistical test is a procedure for statistical hypothesis testing. The outcome of a statistical test is a decision to reject or accept the null hypothesis for a given probability of Type I error. The outcome is frequently reported as a p-value: the smallest level of significance at which the null hypothesis would be rejected.
t-statistic: A t-statistic is a statistic whose sampling distribution is a t-distribution. Often, the term "t-statistic" is used in a narrower sense, as the standardized difference between a sample mean and a population mean m: t = (x̄ - m) / (s / √N), where x̄ and s are the mean and the standard deviation of the sample, and N is the sample size.
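The formula translates directly into code (a minimal sketch; the function name is an illustrative assumption, and s uses the usual N - 1 denominator):

```python
import math

def t_statistic(sample, mu):
    """t = (xbar - mu) / (s / sqrt(N)), where xbar and s are the
    sample mean and sample standard deviation (N - 1 denominator)."""
    n = len(sample)
    xbar = sum(sample) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in sample) / (n - 1))
    return (xbar - mu) / (s / math.sqrt(n))
```

When the sample mean equals the hypothesized mean, the t-statistic is exactly zero.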
t-test: A t-test is a statistical hypothesis test based on a test statistic whose sampling distribution is a t-distribution. Strictly speaking, the various t-tests are aimed at testing hypotheses about populations with a normal probability distribution. However, statistical research has shown that t-tests often provide quite adequate results for non-normally distributed populations as well, particularly when sample sizes are moderate to large.
Tukey's HSD (Honestly Significant Differences) Test: This test is used for testing the significance of unplanned pairwise comparisons. When you do multiple significance tests, the chance of finding a "significant" difference just by chance increases. Tukey's HSD test is one of several methods of ensuring that the chance of finding such a spurious difference across the whole family of comparisons stays at the chosen significance level.
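The inflation of the chance of a spurious finding can be shown with a short calculation. This sketch assumes independent tests for simplicity (Tukey's HSD itself uses the studentized range distribution, which is not implemented here):

```python
def familywise_error(alpha, n_tests):
    """Chance of at least one false positive across n_tests
    independent tests, each run at level alpha:
    1 - (1 - alpha) ** n_tests."""
    return 1 - (1 - alpha) ** n_tests
```

With alpha = 0.05, a single test has a 5% chance of a spurious "significant" result, but ten independent tests have roughly a 40% chance of at least one; this is the problem that Tukey's HSD and similar procedures control.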
Two-Tailed Test: A two-tailed test is a hypothesis test in which the null hypothesis is rejected if the observed sample statistic is more extreme than the critical value in either direction (higher than the positive critical value or lower than the negative critical value). A two-tailed test thus has two rejection regions, one in each tail of the sampling distribution.
Type I Error: In a test of significance, a Type I error is the error of rejecting the null hypothesis when it is true -- of saying an effect or event is statistically significant when it is not. The projected probability of committing a Type I error is called the level of significance.
Type II Error: In a test of significance, a Type II error is the error of accepting the null hypothesis when it is false -- of failing to declare a real difference as statistically significant. Obviously, the bigger your samples, the more likely your test is to detect any difference that actually exists.
Variance/Mean Ratio Test: The variance/mean ratio (VMR) test is a statistical test of the null hypothesis that the variance/mean ratio is 1.0. The VMR test is usually dealt with as a one-sided test because each direction of departure from the null hypothesis has a radically different interpretation: a ratio above 1 indicates clumping (overdispersion), while a ratio below 1 indicates regularity (underdispersion).
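The statistic itself is easy to compute (a minimal sketch; the function name is an illustrative assumption, and the variance uses the n - 1 denominator):

```python
def variance_mean_ratio(counts):
    """Variance/mean ratio (VMR) of a list of counts.

    For Poisson-distributed data the expected VMR is 1;
    VMR > 1 suggests clumping, VMR < 1 suggests regularity."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((x - mean) ** 2 for x in counts) / (n - 1)
    return var / mean
```

Counts that are all identical give a VMR of 0 (perfect regularity), while counts alternating between 0 and a large value give a VMR well above 1 (clumping).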