Term
correlation |
Definition
the relationship between two variables; the correlation coefficient tells us the magnitude and direction of the relationship. |
|
|
Term
most common correlation coefficient |
|
Definition
Pearson correlation coefficient |
|
|
Term
equation of a straight line |
|
Definition
Y = a + bX
a = Y intercept and b = slope
|
|
|
Term
4 reasons why correlation may or may not imply causation |
|
Definition
1. X caused Y
2. Y caused X
3. The correlation between X and Y was spurious
4. A third variable was responsible for the correlation between X and Y |
|
|
Term
three relevant questions about correlation |
|
Definition
1. Is there a relationship between the variables?
2. If so, what's the strength of the relationship?
3. What's the nature of the relationship? |
|
|
Term
what values can the Pearson correlation coefficient range between?
|
|
Definition
-1 to +1
0 means no correlation, -1 is a perfect negative relationship, +1 is a perfect positive relationship. The likelihood of finding a perfect relationship or no relationship at all is very slim. |
|
|
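Note (not from the original cards): a minimal Python sketch, using made-up example scores, showing how Pearson's r is computed; whatever scores you feed it, the result falls between -1 and +1.

import math

def pearson_r(x, y):
    """Pearson correlation coefficient for two equal-length lists of scores."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # sum of cross-products of deviations, and the two sums of squared deviations
    sp = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    ss_x = sum((xi - mean_x) ** 2 for xi in x)
    ss_y = sum((yi - mean_y) ** 2 for yi in y)
    return sp / math.sqrt(ss_x * ss_y)

# hypothetical scores: hours studied vs. exam score
print(round(pearson_r([1, 2, 3, 4, 5], [55, 60, 70, 72, 80]), 3))  # ~0.986, close to +1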
Term
how is magnitude determined? |
|
Definition
by how far the correlation coefficient is from 0 (its absolute value) |
|
|
Term
regression |
Definition
the prediction of one variable from one or more other variables |
|
|
Term
regression equation |
Definition
Ŷ = a + bX
a = Y intercept
b = slope of the regression line |
|
|
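Note (not from the original cards): a small Python sketch of how a and b in Ŷ = a + bX can be estimated by least squares; the data are made up.

def fit_line(x, y):
    """Least-squares estimates of the Y intercept a and slope b for Y-hat = a + bX."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    b = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
        sum((xi - mean_x) ** 2 for xi in x)
    a = mean_y - b * mean_x  # the regression line passes through the two means
    return a, b

a, b = fit_line([1, 2, 3, 4, 5], [55, 60, 70, 72, 80])
print(round(a, 1), round(b, 1), round(a + b * 6, 1))  # 48.8 6.2 86.0 (predicted Y for X = 6)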
Term
criterion variable |
Definition
the dependent variable, Y; the variable that's being predicted |
|
|
Term
predictor variable |
Definition
the independent variable, X; the variable from which predictions are made |
|
|
Term
standard error of the estimate |
|
Definition
the amount of error made when predicting Y scores from X; it summarizes the residuals (Y - Ŷ) |
|
|
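Note (not from the original cards): one common formula for the standard error of the estimate is sqrt(sum of squared residuals / (N - 2)); this Python sketch assumes that form and uses made-up observed and predicted scores.

import math

def std_error_of_estimate(y, y_hat):
    """Square root of the mean squared residual, with N - 2 in the denominator."""
    residuals = [yi - yhi for yi, yhi in zip(y, y_hat)]
    return math.sqrt(sum(r ** 2 for r in residuals) / (len(y) - 2))

# hypothetical observed Y and predicted Y-hat (from Y-hat = 48.8 + 6.2X above)
print(round(std_error_of_estimate([55, 60, 70, 72, 80], [55.0, 61.2, 67.4, 73.6, 79.8]), 2))  # ~1.9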
Term
curvilinear relationship |
Definition
two variables that are related, but not in a linear fashion |
|
|
Term
restriction of range |
Definition
when the range of values in your sample data is restricted |
|
|
Term
|
Definition
can cause a strong relationship to appear weak and vice versa |
|
|
Term
standard error of the difference |
|
Definition
The standard deviation of the sampling distribution of the difference between two independent means. It indicates how much sampling error will occur on average. |
|
|
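Note (not from the original cards): a small Python sketch of the estimated standard error of the difference, computed from the pooled variance; it assumes homogeneous population variances, and the sums of squares and group sizes are hypothetical.

def std_error_of_difference(ss1, n1, ss2, n2):
    """Estimated standard error of the difference between two independent means."""
    pooled_var = (ss1 + ss2) / (n1 + n2 - 2)  # pooled variance estimate
    return (pooled_var * (1 / n1 + 1 / n2)) ** 0.5

print(std_error_of_difference(ss1=40.0, n1=10, ss2=50.0, n2=10))  # 1.0 for these made-up values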
Term
estimated standard error of the mean |
|
Definition
the standard error of the mean computed from sample data, taking degrees of freedom (N - 1) into account |
|
|
Term
t score |
Definition
needs to be used in place of a z score when the standard error of the mean is unknown |
|
|
Term
independent groups t-test |
|
Definition
used when
1. the dependent variable is quantitative and measured on an interval level
2. the independent variable is between subjects in nature
3. the independent variable has 2 and only 2 levels |
|
|
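Note (not from the original cards): a minimal Python sketch of the pooled-variance independent groups t-test applied to two hypothetical groups of scores.

import math

def independent_groups_t(group1, group2):
    """t statistic and degrees of freedom for two independent groups."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    ss1 = sum((x - m1) ** 2 for x in group1)
    ss2 = sum((x - m2) ** 2 for x in group2)
    pooled_var = (ss1 + ss2) / (n1 + n2 - 2)
    se_diff = math.sqrt(pooled_var * (1 / n1 + 1 / n2))  # standard error of the difference
    return (m1 - m2) / se_diff, n1 + n2 - 2

t, df = independent_groups_t([10, 12, 14, 11, 13], [8, 9, 11, 7, 10])
print(round(t, 2), df)  # 3.0 8 for these made-up scores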
Term
|
Definition
when you subtract the grand mean from all of your other scores |
|
|
Term
error variability |
Definition
amount of unexplained variability in the dependent variable, the variability that remains after the effects of the independent variable are removed |
|
|
Term
total variability |
Definition
the total variability in the dependent variable |
|
|
Term
explained variability |
Definition
the amount of influence the independent variable had on the dependent variable |
|
|
Term
eta squared |
Definition
indexes the strength of the relationship between the IV and DV, proportion of variability in the DV that is associated with the IV |
|
|
Term
weak, average, and strong eta squared values |
|
Definition
.05 = weak
.10 = average
.15 = strong |
|
|
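Note (not from the original cards): eta squared can be obtained from a t statistic as t^2 / (t^2 + df); this sketch assumes that formula and reuses the hypothetical t and df from the sketch above.

def eta_squared(t, df):
    """Proportion of DV variability associated with the IV."""
    return t ** 2 / (t ** 2 + df)

print(round(eta_squared(3.0, 8), 2))  # 0.53, which would count as strong by the benchmarks above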
Term
correlated groups t-test |
Definition
used when
1. the dependent variable is quantitative and measured on an interval level
2. the independent variable is WITHIN SUBJECTS in nature (which is why it differs from the independent groups t-test)
3. the independent variable has 2 and only 2 levels |
|
|
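Note (not from the original cards): a minimal Python sketch of the correlated groups (paired) t-test, computed on difference scores; the pre/post scores are hypothetical.

import math

def correlated_groups_t(pre, post):
    """t statistic and degrees of freedom for a correlated groups t-test."""
    d = [b - a for a, b in zip(pre, post)]  # one difference score per participant
    n = len(d)
    mean_d = sum(d) / n
    sd_d = math.sqrt(sum((di - mean_d) ** 2 for di in d) / (n - 1))
    se_mean_d = sd_d / math.sqrt(n)  # estimated standard error of the mean difference
    return mean_d / se_mean_d, n - 1

t, df = correlated_groups_t(pre=[10, 12, 9, 14, 11], post=[13, 14, 10, 17, 13])
print(round(t, 2), df)  # 5.88 4 for these made-up scores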
Term
advantages of correlated groups t-test:
|
|
Definition
it controls for disturbance variables, which makes it easier to detect a relationship; this provides a more sensitive test |
|
|
Term
the most common disturbance variable |
|
Definition
individual differences |
|
Term
sampling distribution of the mean of difference scores |
|
Definition
a theoretical distribution consisting of the mean difference score across all individuals in a sample, for all possible random samples |
|
|
Term
assumptions of the correlated groups t-test |
|
Definition
1. the sample is independently and randomly selected
2. the population of difference scores is normally distributed
3. the dependent variable is quantitative in nature and measured on an interval level |
|
|
Term
eta squared in a correlated groups t-test |
|
Definition
the proportion of variability in the DV that is associated with the IV after variability from individual differences has been removed |
|
|
Term
downside of correlated groups t-test |
|
Definition
since it is a within-subjects design, it is susceptible to carryover effects |
|
|
Term
within-subjects design |
Definition
participants receive all levels of the IV |
|
|
Term
between-subjects design |
Definition
the levels of the IV are split up between participants (each participant receives only one level) |
|
|
Term
what happens when you calculate pooled variance |
|
Definition
the two sample variances are combined into a single estimate, on the assumption that the population variance is the same for both groups |
|
|
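Note (not from the original cards): a small Python sketch of pooled variance as a weighted average of two sample variances, weighted by their degrees of freedom; the variances and sample sizes are made up.

def pooled_variance(var1, n1, var2, n2):
    """Pooled variance estimate, assuming both populations have the same variance."""
    return ((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)

print(round(pooled_variance(var1=4.0, n1=10, var2=6.0, n2=15), 2))  # 5.22, between the two sample variances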
Term
why is nullifying important |
|
Definition
it removes the effect of the conditions in the independent groups t-test; in the correlated groups t-test it removes individual differences |
|
|
Term
counterbalancing |
Definition
eliminates the effects of familiarity or intervening events when using a within-subjects design |
|
|
Term
|
Definition
describe relationships between variables, make inferences about populations |
|
|
Term
pooled variance |
Definition
used for estimating variance after looking at several samples where the mean may vary but the variance is thought to be the same |
|
|
Term
assumption of the independent groups t-test |
|
Definition
the population variances are homogeneous (the homogeneity of variance assumption): σ₁² = σ₂² = σ² |
|
|
Term
|
Definition
the proportion of variability in the dependent variable that can be explained by the independent variable
aka the coefficient of determination |
|
|
Term
1 - r² |
Definition
the coefficient of alienation |
|
|