Term
Description and Prediction (Two Uses of Statistics)
Definition
o Description (descriptive statistics)
  • Describe data that have been collected
  • Used to reveal the distribution of the data in each variable
  • Commonly used descriptive statistics include measures of central tendency and standard deviations
o Prediction (inferential statistics)
  • Used to draw conclusions and make predictions from the data (see the inferential statistics card below)
Term
Statistics
Definition
A set of tools (methods) used to organize and analyze data. Note that the presence of statistics alone does not mean the research is high quality.
Term
(Statistics) Distributions
Definition
o Graphic representation of data
o The line formed by connecting data points is called a frequency distribution; this line can take many shapes
o The single most important shape is the bell-shaped curve, which characterizes the distribution as "normal"
o As a frequency distribution approaches a normal curve, generalizations about the data set from which the distribution was derived can be made with greater clarity
o Not all frequency distributions approach a normal curve; some are skewed
o When a frequency distribution is skewed, the characteristics inherent to a normal curve no longer apply
o Mean, median, and mode are the key measures of central tendency to know (computed in the sketch below)
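A minimal sketch in Python (using hypothetical scores, not data from the course) of how the frequency of each value and the three measures of central tendency can be computed:

```python
from statistics import mean, median, mode
from collections import Counter

scores = [70, 75, 80, 80, 85, 85, 85, 90, 90, 95]  # hypothetical scores

print(Counter(scores))   # frequency distribution: value -> count
print(mean(scores))      # arithmetic average (83.5)
print(median(scores))    # middle value when sorted (85)
print(mode(scores))      # most frequent value (85)
```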
Term
Rules of Thumb for Measures of Central Tendency (Use ... If ...)
Definition
o Use the mean to describe the middle of a set of data that does not have outliers. An outlier is a data value that is much higher or lower than the other data values in the set.
o The median is the middle value in the set when the numbers are arranged in order. For a set containing an even number of data items, the median is the mean of the two middle data values. Use the median to describe the middle of a set of data that does have an outlier.
o The mode is the data item that occurs the most times. It is possible for a set of data to have no mode, one mode, or more than one mode. Use the mode when choosing the most popular item. (See the sketch below.)
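A short sketch (hypothetical values) showing the rule of thumb: an outlier pulls the mean but leaves the median alone:

```python
from statistics import mean, median, mode

no_outlier   = [20, 22, 23, 23, 25]
with_outlier = [20, 22, 23, 23, 95]   # 95 is much higher than the rest: an outlier

print(mean(no_outlier), median(no_outlier))      # 22.6  23
print(mean(with_outlier), median(with_outlier))  # 36.6  23  (mean shifts, median does not)
print(mode(with_outlier))                        # 23, the most frequently occurring value
```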
Term
Standard Deviation
Definition
o A statistic that tells you how tightly the values in a data set are clustered around the mean
o When the values are tightly bunched together and the bell-shaped curve is steep, the standard deviation is small
o When the values are spread apart and the bell curve is relatively flat, the standard deviation is relatively large (compare the two samples in the sketch below)
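A quick sketch (made-up numbers) comparing a tightly clustered sample with a spread-out one; both have the same mean, but the spread-out sample has the larger standard deviation:

```python
from statistics import mean, stdev

tight  = [98, 99, 100, 101, 102]   # bunched around the mean -> steep curve
spread = [70, 85, 100, 115, 130]   # spread apart -> flatter curve

print(mean(tight), stdev(tight))    # mean 100, small standard deviation
print(mean(spread), stdev(spread))  # mean 100, much larger standard deviation
```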
Term
Normal Distribution and the 68-95-99.7 Rule
Definition
• Example: scores with a mean of 100 and a standard deviation of 15
• 100 is the "peak" of the curve: expect 50% of scores below the mean and 50% above it
• 68% of scores fall within one standard deviation of the mean (between 85 and 115)
• 95% of scores fall within two standard deviations of the mean (between 70 and 130)
• 99.7% of scores fall within three standard deviations of the mean (between 55 and 145)
• (Checked empirically in the simulation below)
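A small simulation sketch checking the 68-95-99.7 rule for the example on this card (mean 100, standard deviation 15); the simulated scores are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(loc=100, scale=15, size=100_000)  # simulated normal scores

for k in (1, 2, 3):
    lo, hi = 100 - k * 15, 100 + k * 15
    share = np.mean((scores >= lo) & (scores <= hi))
    print(f"within {k} SD ({lo}-{hi}): {share:.1%}")  # roughly 68%, 95%, 99.7%
```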
Term
Inferential Statistics (Prediction)
Definition
• Used to draw conclusions and make predictions based on the descriptions of data
• These predictions are expressed in terms of probability
Term
Probability (p Level / Level of Significance)
Definition
• Probability is the chance that a phenomenon has of occurring randomly
• Shown as p (the "p" level or level of significance)
• The smaller the level of significance (e.g., p < .001), the greater the confidence in rejecting the null hypothesis
• In other words, p < .001 suggests there is a 1 in 1,000 chance that the reported statistical finding would occur by chance
• As a general rule, p < .05 is the minimum standard in the field (the simulation below illustrates the 5% chance level)
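A simulation sketch of what p means: when two samples are drawn from the same population (so the null hypothesis is true), p < .05 shows up in roughly 5% of comparisons purely by chance. The population values here are assumptions for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
trials, false_alarms = 2000, 0

for _ in range(trials):
    a = rng.normal(100, 15, 30)   # both groups come from the same population,
    b = rng.normal(100, 15, 30)   # so any "difference" is due to chance
    t, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_alarms += 1

print(false_alarms / trials)      # close to 0.05
```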
Term
Statistical Tests for Analyzing Differences
Definition
See the following cards: t-tests, ANOVA, ANCOVA, and the nonparametric chi-square test.
Term
T-Tests (reported as a t value)
Definition
A statistical test used to determine whether the scores of two groups differ on a single variable. A paired t-test can be used to determine whether the scores of the same participants differ under different conditions; it is often used in pre-post designs. See the Adventure Learning study.
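A minimal paired t-test sketch for a pre-post design, using scipy; the pre/post scores are hypothetical and are not from the Adventure Learning study:

```python
from scipy import stats

pre  = [62, 70, 55, 68, 74, 60, 66, 71]   # same participants measured twice
post = [68, 74, 61, 70, 79, 63, 72, 75]

t, p = stats.ttest_rel(pre, post)          # paired (related-samples) t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```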
Term
ANOVA (Analysis of Variance) (reported as an F value)
Definition
A method of statistical analysis used to determine differences among the means of two or more groups on a variable.
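A one-way ANOVA sketch comparing three hypothetical groups on a single variable; the result is reported as an F value:

```python
from scipy import stats

group1 = [82, 85, 88, 75, 80]
group2 = [78, 74, 70, 72, 76]
group3 = [90, 88, 85, 92, 87]

f, p = stats.f_oneway(group1, group2, group3)  # differences among group means
print(f"F = {f:.2f}, p = {p:.4f}")
```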
Term
ANCOVA (Analysis of Covariance) (reported as an F value)
Definition
A method used to test differences in the means of dependent variables for two groups, controlling for the effects of selected variables that may co-vary with the dependent variable. In other words, if the researcher has evidence of an existing difference between two or more groups that might influence the dependent variable, ANCOVA should be selected to statistically adjust for that difference.
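One common way to run an ANCOVA-style analysis is a linear model with the covariate added; the sketch below uses statsmodels and hypothetical pre/post scores (the card itself does not name a tool):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "group": ["treatment"] * 5 + ["control"] * 5,
    "pre":   [60, 65, 70, 55, 62, 61, 66, 69, 57, 63],   # covariate
    "post":  [72, 75, 80, 66, 74, 65, 70, 72, 60, 68],   # dependent variable
})

# Group difference on the post-test, statistically adjusted for the pre-test.
model = smf.ols("post ~ C(group) + pre", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # F value for group, controlling for pre
```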
Term
Nonparametric Test for Analyzing Differences: Chi-Square
Definition
Chi-square (χ²): a nonparametric method (one that does not assume a normal distribution) used to test the difference between an actual sample and a hypothetical or previously established distribution.
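A chi-square goodness-of-fit sketch: observed counts from a hypothetical sample are tested against a previously established (expected) distribution:

```python
from scipy import stats

observed = [30, 50, 20]   # counts actually observed in the sample
expected = [33, 34, 33]   # counts expected under the established distribution

chi2, p = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
```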
Term
Multivariate Statistics
Definition
o Statistical procedures used to analyze multiple variables simultaneously (e.g., the effects of independent variables on multiple dependent variables)
o Examples include MANOVA, MANCOVA, and multiple regression analysis
o Results are reported in terms of correlations (relationships) between variables
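A multiple regression sketch (one of the procedures listed above) using statsmodels; the variables and values are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "achievement": [70, 75, 80, 68, 90, 85, 78, 82],
    "hours_study": [2, 3, 4, 1, 6, 5, 3, 4],
    "attendance":  [80, 85, 90, 75, 98, 95, 88, 92],
})

# Two predictors analyzed simultaneously.
model = smf.ols("achievement ~ hours_study + attendance", data=df).fit()
print(model.params)     # regression coefficients
print(model.rsquared)   # overall strength of the relationship
```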
Term
Correlation
Definition
o Correlation denotes a positive or negative association between variables in a study
o Two variables are positively associated when larger values of one tend to be accompanied by larger values of the other (e.g., homework time goes up and scores go up)
o Two variables are negatively associated when larger values of one tend to be accompanied by smaller values of the other (e.g., homework time goes up and scores go down)
Term
Correlation Coefficient
Definition
o The correlation coefficient indicates the relationship between two random variables, providing a measure of the strength and direction of the correlation, which varies from -1 to +1
o A correlation of .8 is stronger than a correlation of .3 because it is closer to 1
o Positive values (the positive sign is understood) indicate that the two variables are positively correlated; negative values indicate that they are negatively correlated
o Values close to +1 or -1 reveal that the two variables are highly related (see the sketch below)
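A sketch computing correlation coefficients for hypothetical data: one strongly positive pair and one strongly negative pair:

```python
from scipy import stats

homework_hours = [1, 2, 3, 4, 5, 6]
test_scores    = [60, 65, 72, 75, 83, 88]   # rises with homework
absences       = [10, 8, 7, 5, 3, 1]        # falls as scores rise

r_pos, _ = stats.pearsonr(homework_hours, test_scores)
r_neg, _ = stats.pearsonr(test_scores, absences)
print(f"r = {r_pos:.2f}")   # close to +1: strong positive correlation
print(f"r = {r_neg:.2f}")   # close to -1: strong negative correlation
```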
Term
Statistical Significance vs. Meaningfulness
Definition
Within quantitative research, relationships between variables can be statistically significant without being meaningful.
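A simulation sketch of this point: with a very large sample, even a trivially weak relationship can come out statistically significant. The data are simulated for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 100_000
x = rng.normal(size=n)
y = 0.01 * x + rng.normal(size=n)   # the true relationship is tiny

r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f}, p = {p:.3g}")  # r near 0, yet p will often fall below .05
```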
Term
Effect Size
Definition
o A way of showing the strength of association between variables; effect sizes complement inferential statistics such as p values
o An effect size of d = 1.0 for a reading program means the program increased the reading score of the average student to one standard deviation above the mean; a negative effect size of d = -1.0 means the reading score of the average student in the program decreased to one standard deviation below the mean
o Generally, an effect size of .2 is considered small, .5 moderate, and .8 large
o For example, the Adventure Learning study had a small effect size, and it was not reported
o There is a big push in research to report effect sizes (see the sketch below)
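A sketch of Cohen's d, a common effect size: the difference between two group means divided by a pooled standard deviation. The groups and scores are hypothetical:

```python
import numpy as np

program = np.array([72, 78, 75, 80, 74, 77, 79, 73])   # e.g., reading-program group
control = np.array([70, 76, 72, 78, 71, 74, 75, 69])

pooled_sd = np.sqrt((program.var(ddof=1) + control.var(ddof=1)) / 2)
d = (program.mean() - control.mean()) / pooled_sd
print(f"d = {d:.2f}")   # compare against the rough .2 / .5 / .8 benchmarks
```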