Term
| t test |
|
Definition
| hypothesis-testing procedure in which the population variance is unknown; it compares t scores from a sample to a comparison distribution called a t distribution |
|
|
Term
| t test for a single sample |
|
Definition
| hypothesis-testing procedure in which a sample mean is being compared to a known population mean and the population variance is unknown |
|
|
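The procedure defined above can be sketched in Python; all numbers here are made up for illustration:

```python
import math

# Hypothetical sample and a known population mean (made-up numbers).
scores = [5, 8, 6, 9, 7]
pop_mean = 6

n = len(scores)
sample_mean = sum(scores) / n                     # 7.0

# Unbiased estimate of the population variance: SS / (n - 1)
ss = sum((x - sample_mean) ** 2 for x in scores)  # sum of squared deviations
s2 = ss / (n - 1)                                 # 2.5

# Standard deviation of the distribution of means
s_m = math.sqrt(s2 / n)

# t = (sample mean - known population mean) / that standard deviation
t = (sample_mean - pop_mean) / s_m                # about 1.414
```

The resulting t would be compared to the cutoff from a t table with n - 1 = 4 degrees of freedom.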
Term
| biased estimate |
|
Definition
| estimate of a population parameter that is likely to systematically overestimate or underestimate the true value of the population parameter |
|
|
Term
| unbiased estimate of the population variance |
|
Definition
| estimate of the population variance, based on sample scores, which has been corrected so that it is equally likely to overestimate or underestimate the true population variance; the correction used is dividing the sum of squared deviations by the sample size minus 1, instead of the usual procedure of dividing by the sample size directly |
|
|
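The correction described above is easy to see side by side; the sample scores below are hypothetical:

```python
scores = [2, 4, 6, 8]                            # made-up sample
n = len(scores)
mean = sum(scores) / n                           # 5.0
ss = sum((x - mean) ** 2 for x in scores)        # sum of squared deviations = 20

biased = ss / n          # dividing by N tends to underestimate the variance
unbiased = ss / (n - 1)  # dividing by N - 1 gives the corrected estimate
```

Here the biased figure is 5.0 while the unbiased estimate is about 6.67; with larger samples the two converge.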
Term
| degrees of freedom |
|
Definition
| number of scores free to vary when estimating a population parameter; usually part of a formula for making that estimate; for example, in the formula for estimating the population variance from a single sample, the degrees of freedom is the number of scores minus 1 |
|
|
Term
| t distribution |
|
Definition
| mathematically defined curve that is the comparison distribution used in a t test |
|
|
Term
| t score |
|
Definition
| on a t distribution, number of standard deviations from the mean (like a z score, but on a t distribution) |
|
|
Term
| t test for dependent means |
|
Definition
| hypothesis-testing procedure in which there are two scores for each person and the population variance is not known; it determines the significance of a hypothesis that is being tested using difference or change scores from a single group of people |
|
|
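A minimal sketch of this procedure, using hypothetical before/after scores for the same four people:

```python
import math

# Made-up before and after scores for a single group of people.
before = [10, 12, 9, 14]
after = [13, 14, 10, 17]

# Difference (change) scores: after score minus before score.
diff = [a - b for a, b in zip(after, before)]    # [3, 2, 1, 3]
n = len(diff)
m = sum(diff) / n                                # mean difference = 2.25

# Unbiased estimate of the population variance of difference scores.
ss = sum((d - m) ** 2 for d in diff)
s2 = ss / (n - 1)

# Null hypothesis: the population's mean change is 0.
s_m = math.sqrt(s2 / n)
t = (m - 0) / s_m                                # about 4.70
```

From here the test proceeds exactly like a t test for a single sample, with n - 1 = 3 degrees of freedom.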
Term
| difference score |
|
Definition
| difference between a person's score on one testing and the same person's score on another testing; often an after score minus a before score |
|
|
Term
| assumption |
|
Definition
| a condition, such as a population's having a normal distribution, required for carrying out a particular hypothesis-testing procedure; a part of the mathematical foundation for the accuracy of the tables used in determining cutoff values |
|
|
Term
| t test for independent means |
|
Definition
| hypothesis-testing procedure in which there are two separate groups of people tested and in which the population variance is not known |
|
|
Term
| distribution of differences between means |
|
Definition
| distribution of differences between means of pairs of samples such that for each pair of means, one is from one population and the other is from a second population; the comparison distribution in a t test for independent means |
|
|
Term
| pooled estimate of the population variance |
|
Definition
| in a t test for independent means, weighted average of the estimates of the population variance from two samples (each estimate weighted by a proportion consisting of its sample's degrees of freedom divided by the total degrees of freedom for both samples) |
|
|
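The weighting described above can be sketched directly; the sample sizes and variance estimates here are hypothetical:

```python
# Made-up sizes and population variance estimates from two samples.
n1, s2_1 = 10, 4.0
n2, s2_2 = 20, 7.0

df1, df2 = n1 - 1, n2 - 1        # 9 and 19
df_total = df1 + df2             # 28

# Each sample's estimate is weighted by its share of the total
# degrees of freedom, not by its raw sample size.
pooled = (df1 / df_total) * s2_1 + (df2 / df_total) * s2_2
```

Because the second sample contributes more degrees of freedom, the pooled estimate (about 6.04) lands closer to 7.0 than to 4.0.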
Term
| weighted average |
|
Definition
| average in which the scores being averaged do not have equal influence on the total, as in figuring the pooled variance estimate in a t test for independent means |
|
|
Term
| variance of the distribution of differences between means |
|
Definition
| one of the numbers figured as part of a t test for independent means; it equals the sum of the variances of the distributions of means associated with each of the two samples |
|
|
Term
| standard deviation of the distribution of differences between means |
|
Definition
| in a t test for independent means, square root of the variance of the distribution of differences between means |
|
|
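The last few definitions chain together in the t test for independent means; the pooled estimate and sample means below are made-up values:

```python
import math

# Hypothetical inputs: sample sizes, a pooled population variance
# estimate, and the two sample means.
n1, n2 = 10, 20
pooled = 6.0
m1, m2 = 7.5, 6.0

# Variance of the distribution of means for each sample.
s2_m1 = pooled / n1              # 0.6
s2_m2 = pooled / n2              # 0.3

# Variance of the distribution of differences between means is their
# sum; its square root is the standard deviation of that distribution.
s2_diff = s2_m1 + s2_m2          # 0.9
s_diff = math.sqrt(s2_diff)

t = (m1 - m2) / s_diff           # about 1.58
```

This t is compared to a cutoff from the t distribution with (n1 - 1) + (n2 - 1) = 28 degrees of freedom.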
Term
| analysis of variance (ANOVA) |
|
Definition
| hypothesis-testing procedure for studies with three or more groups |
|
|
Term
| within-groups estimate of the population variance |
|
Definition
| in ANOVA, estimate of the variance of the population of individuals based on the variation among the scores within each of the actual groups |
|
|
Term
| between-groups estimate of the population variance |
|
Definition
| in ANOVA, estimate of the variance of the population of individuals based on the variation among the means of the groups studied |
|
|
Term
| F ratio |
|
Definition
| in ANOVA, ratio of the between-groups population variance estimate to the within-groups population variance estimate; score on the comparison distribution (F distribution) in an ANOVA; also referred to as F |
|
|
Term
| F distribution |
|
Definition
| mathematically defined curve that is the comparison distribution used in ANOVA; distribution of F ratios when the null hypothesis is true |
|
|
Term
|
Definition
| table of cutoff scores on the F distribution for various degrees of freedom and significance levels |
|
|
Term
| grand mean |
|
Definition
| in ANOVA, overall mean of all the scores, regardless of what group they are in; when group sizes are equal, mean of the group means |
|
|
Term
| between-groups degrees of freedom |
|
Definition
| degrees of freedom used in the between-groups estimate of the population variance in an ANOVA (numerator of the F ratio); number of scores free to vary (number of means minus 1) in figuring the between-groups estimate of the population variance; same as numerator degrees of freedom |
|
|
Term
| within-groups degrees of freedom |
|
Definition
| degrees of freedom used in the within-groups estimate of the population variance in an ANOVA (denominator of the F ratio); number of scores free to vary (number of scores in each group minus 1, summed over all groups) in figuring the within-groups population variance estimate; same as denominator degrees of freedom |
|
|
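The ANOVA pieces defined above (grand mean, both variance estimates, both kinds of degrees of freedom, and the F ratio) fit together as follows; the three equal-sized groups of scores are made up:

```python
# Hypothetical one-way ANOVA with three equal-sized groups.
groups = [[4, 5, 6], [7, 8, 9], [4, 6, 8]]
n = len(groups[0])               # scores per group = 3
k = len(groups)                  # number of groups = 3

means = [sum(g) / n for g in groups]      # [5.0, 8.0, 6.0]
grand_mean = sum(means) / k               # mean of group means (equal sizes)

# Between-groups estimate: variance of the group means times n.
df_between = k - 1                        # number of means minus 1
s2_means = sum((m - grand_mean) ** 2 for m in means) / df_between
s2_between = s2_means * n                 # 7.0

# Within-groups estimate: pooled variation of scores within each group.
df_within = k * (n - 1)                   # (scores per group - 1), summed
s2_within = sum(
    sum((x - m) ** 2 for x in g)
    for g, m in zip(groups, means)
) / df_within                             # 2.0

F = s2_between / s2_within                # 3.5
```

The F of 3.5 would be compared to the cutoff from an F table with 2 numerator and 6 denominator degrees of freedom.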
Term
| protected t tests |
|
Definition
| in ANOVA, t tests among pairs of means after finding that the F for the overall difference among the means is significant |
|
|
Term
| proportion of variance accounted for (R squared) |
|
Definition
| measure of effect size for ANOVA |
|
|
Term
| factorial analysis of variance |
|
Definition
| ANOVA for a factorial research design |
|
|
Term
| factorial research design |
|
Definition
| way of organizing a study in which the influence of two or more variables is studied at once by setting up the situation so that a different group of people is tested for each combination of the levels of the variables |
|
|
Term
| interaction effect |
|
Definition
| situation in a factorial ANOVA in which the combination of variables has an effect that could not be predicted from the effects of the two variables individually |
|
|
Term
| two-way analysis of variance |
|
Definition
| ANOVA for a two-way factorial research design |
|
|
Term
| two-way factorial research design |
|
Definition
| factorial design with two variables that each divide the groups |
|
|
Term
| grouping variable |
|
Definition
| variable that separates groups in an ANOVA |
|
|
Term
| independent variable |
|
Definition
| variable considered to be a cause, such as what group a person is in for an ANOVA |
|
|
Term
| one-way analysis of variance |
|
Definition
| ANOVA in which there is only one grouping variable (as distinguished from a factorial ANOVA) |
|
|
Term
| main effect |
|
Definition
| difference between groups on one grouping variable in a factorial ANOVA; result for a variable that divides the groups, averaging across the levels of the other variable that divides the groups |
|
|
Term
| dependent variable |
|
Definition
| variable considered to be an effect |
|
|
Term
| chi-square test |
|
Definition
| hypothesis-testing procedure used when the variables of interest are nominal variables |
|
|
Term
| chi-square test for goodness of fit |
|
Definition
| hypothesis-testing procedure that examines how well an observed frequency distribution of a single nominal variable fits some expected pattern of frequencies |
|
|
Term
| chi-square test for independence |
|
Definition
| hypothesis-testing procedure that examines whether the distribution of frequencies over the categories of one nominal variable is unrelated to (independent of) the distribution of frequencies over the categories of a second nominal variable |
|
|
Term
| observed frequency |
|
Definition
| in a chi-square test, number of individuals actually found in the study to be in a category or cell |
|
|
Term
| expected frequency |
|
Definition
| in a chi-square test, number of people in a category or cell expected if the null hypothesis were true |
|
|
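For a test for independence, the expected frequencies come from the row and column totals of the contingency table; the 2 x 2 table of observed frequencies below is hypothetical:

```python
# Made-up 2 x 2 contingency table of observed frequencies.
table = [[20, 30],     # rows: categories of one nominal variable
         [10, 40]]     # columns: categories of the other

row_totals = [sum(row) for row in table]          # [50, 50]
col_totals = [sum(col) for col in zip(*table)]    # [30, 70]
total = sum(row_totals)                           # 100

# Expected frequency for each cell if the null hypothesis (independence)
# were true: (row total * column total) / overall total.
expected = [[r * c / total for c in col_totals] for r in row_totals]
```

Here each cell's expected frequency is 15 or 35, against which the observed 20, 30, 10, and 40 would be compared.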
Term
| chi-square statistic |
|
Definition
| statistic that reflects the overall lack of fit between the expected and observed frequencies; the sum, over all the categories, of the squared difference between observed and expected frequencies divided by the expected frequency |
|
|
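The sum described above is a one-liner; the observed and expected frequencies here are made up (equal expected frequencies, as under a simple null hypothesis):

```python
# Hypothetical observed and expected frequencies over four categories.
observed = [30, 20, 25, 25]
expected = [25, 25, 25, 25]

# Chi-square: sum over categories of (observed - expected)^2 / expected.
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))   # 2.0
```

The result (2.0 here) is compared to the cutoff from a chi-square table with the appropriate degrees of freedom.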
Term
| chi-square distribution |
|
Definition
| mathematically defined curve used as the comparison distribution in chi-square tests; distribution of the chi-square statistic |
|
|
Term
| chi-square table |
|
Definition
| table of cutoff scores on the chi-square distribution for various degrees of freedom and significance levels |
|
|
Term
| contingency table |
|
Definition
| two-dimensional chart showing frequencies in each combination of categories of two nominal variables, as in a chi-square test for independence |
|
|
Term
| independence |
|
Definition
| situation of no relationship between two variables; term usually used regarding two nominal variables in the chi-square test for independence |
|
|
Term
| cell |
|
Definition
| in chi-square, particular combination of categories for two variables in a contingency table |
|
|