Term
why use multivariate stats? |
|
Definition
1 - evaluate complex causal models with more than one independent variable 2 - control for threats to internal validity |
|
|
Term
Three criteria for causal inference using statistical control |
|
Definition
1. Temporal order 2. Covariation (established in bivariate analysis) 3. Internal validity (established using multivariate analysis) |
|
|
Term
Two major methods of control in statistical analysis |
|
Definition
Physical - examine the relationship within subgroups using multiple contingency tables (see the sketch after this card). Statistical - use procedures that estimate the results we would get using physical control. |
|
|
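A minimal sketch of physical control in Python (pandas), using made-up data and column names: the X-by-Y relationship is re-examined inside each category of a control variable.

    # Physical control sketch: one X-by-Y contingency table per category of a
    # control variable (all data and column names here are hypothetical).
    import pandas as pd

    df = pd.DataFrame({
        "education": ["low", "high", "low", "high", "low", "high", "low", "high"],
        "income":    ["low", "high", "high", "low", "low", "high", "low", "high"],
        "gender":    ["m", "m", "m", "m", "f", "f", "f", "f"],
    })

    # Examine the education-income relationship separately within each subgroup
    for level, subgroup in df.groupby("gender"):
        print(f"\ngender = {level}")
        print(pd.crosstab(subgroup["education"], subgroup["income"]))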
Term
Zero-order, conditional, and first-order partial gamma |
|
Definition
Zero-order partial gamma - gamma for the entire sample. Conditional gamma - gamma for a single subtable. First-order partial gamma - average of all conditional gammas (a single value showing the relationship between X and Y across all values of the control variable); see the sketch after this card. |
|
|
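A minimal Python sketch, with made-up counts, of the three gammas: the zero-order gamma would use the full-sample table, each conditional gamma uses one subtable, and the first-order partial gamma pools the concordant (C) and discordant (D) pairs across subtables, which works out to a weighted average of the conditional gammas.

    import numpy as np

    def concordant_discordant(table):
        t = np.asarray(table, dtype=float)
        C = D = 0.0
        rows, cols = t.shape
        for i in range(rows):
            for j in range(cols):
                C += t[i, j] * t[i + 1:, j + 1:].sum()   # pairs ranked the same way
                D += t[i, j] * t[i + 1:, :j].sum()       # pairs ranked in opposite ways
        return C, D

    def gamma(table):
        C, D = concordant_discordant(table)
        return (C - D) / (C + D)

    # Two X-by-Y subtables, one per category of the control variable (made-up counts)
    subtables = [np.array([[20, 5], [10, 15]]), np.array([[18, 7], [9, 16]])]

    conditional = [gamma(t) for t in subtables]                    # one gamma per subtable
    C_total = sum(concordant_discordant(t)[0] for t in subtables)
    D_total = sum(concordant_discordant(t)[1] for t in subtables)
    partial = (C_total - D_total) / (C_total + D_total)            # first-order partial gamma
    print(conditional, partial)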
Term
Physical control (strengths and weaknesses) |
|
Definition
Strengths - highly sensitive to discovering conditional relationships; the only type of control we can use on nominal data. Weaknesses - cannot simultaneously control for many variables because we run out of cases; does not show whether it is the spurious model or the intervening model that causes no relationship to appear. |
|
|
Term
Stat control - using multiple regression and partial correlation |
|
Definition
Multiple correlation coefficient (R), multiple coefficient of determination (R^2), partial correlation coefficient, partial coefficient of determination (CoD), partial slopes |
|
|
Term
Assessing multivariate models |
|
Definition
Multiple correlation coefficient (R) - ranges from 0 to 1; measures goodness of fit. Multiple coefficient of determination (R^2) - the proportion of the variation in Y explained by all the independent variables together (see the sketch after this card). |
|
|
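A minimal numpy sketch, on made-up data, of the multiple coefficient of determination as explained variation over total variation in Y; the multiple correlation coefficient is its square root.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))                       # two independent variables
    y = 1.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=100)

    design = np.column_stack([np.ones(len(y)), X])      # add an intercept column
    coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
    y_hat = design @ coefs

    r_squared = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    multiple_r = np.sqrt(r_squared)                     # 0 to 1: goodness of fit
    print(multiple_r, r_squared)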
Term
using partials to control for extraneous variables |
|
Definition
Partial slopes or partial correlation coefficients: zero-order - controls for 0 other variables; first-order - controls for 1; second-order - controls for 2. |
|
|
Term
Multiple regression equation |
|
Definition
Y = a + b1X1 + b2X2 + e. Each partial slope tells us whether a relationship exists between that X and Y, controlling for the other variables in the equation; if there is a relationship, its sign also tells us the direction (see the sketch after this card). |
|
|
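A tiny sketch with hypothetical coefficients showing how the equation is read: each partial slope is the change in Y per one-unit change in its X with the other X held constant, and its sign gives the direction of the relationship.

    # Hypothetical estimates of a, b1, b2 for Y = a + b1*X1 + b2*X2 + e
    a, b1, b2 = 2.0, 0.8, -0.5

    def predicted_y(x1, x2):
        return a + b1 * x1 + b2 * x2

    print(predicted_y(3, 1))                        # 2.0 + 0.8*3 - 0.5*1 = 3.9
    print(predicted_y(4, 1) - predicted_y(3, 1))    # = b1: change in Y per unit X1, X2 held constant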
Term
Two types of partial slopes |
|
Definition
1. Raw (unstandardized) partial slopes 2. Standardized partial slopes (beta weights) - see the sketch after this card |
|
|
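A minimal sketch, on made-up data, of the usual conversion from raw partial slopes to beta weights, beta_k = b_k * (s_Xk / s_Y), which puts the slopes on a common scale so their effects on Y can be compared.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 2)) * [1.0, 10.0]         # X2 has much more spread than X1
    y = 0.6 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(size=200)

    design = np.column_stack([np.ones(len(y)), X])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1:]   # raw partial slopes (intercept dropped)
    beta = b * X.std(axis=0, ddof=1) / y.std(ddof=1)    # beta weights: comparable across Xs
    print(b, beta)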
Term
Raw or unstandardized partial slope |
|
Definition
The change in Y produced by a one-unit change in X, controlling for the other independent variables in the equation. If 0, there is no relationship; if non-zero, use a test of significance. Cannot be used to determine which variable has the greatest effect on Y because it is sensitive to the units and amount of variation in each independent variable. |
|
|
Term
Partial correlation coefficient and partial coefficient of determination |
|
Definition
Partial correlation coefficient - gives a single measure of the degree of relationship between two variables, controlling for one or more other variables; ranges from -1 to 1, where 1 is a perfect relationship and 0 is no relationship. Partial coefficient of determination - ranges from 0 to 1; the proportion of the variation in the dependent variable that cannot be explained by the control variables but can be explained by the adjusted scores of the independent variable (see the formula sketch after this card). |
|
|
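A minimal sketch of the standard first-order partial correlation formula, computed from hypothetical zero-order correlations; the partial coefficient of determination is its square.

    # r_{Y1.2} = (r_Y1 - r_Y2 * r_12) / sqrt((1 - r_Y2^2) * (1 - r_12^2))
    import math

    r_y1, r_y2, r_12 = 0.60, 0.40, 0.50   # hypothetical zero-order correlations

    r_y1_2 = (r_y1 - r_y2 * r_12) / math.sqrt((1 - r_y2 ** 2) * (1 - r_12 ** 2))
    partial_cod = r_y1_2 ** 2             # proportion of remaining variation in Y explained by X1
    print(r_y1_2, partial_cod)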
Term
Average Zero Order Correlation |
|
Definition
The partial correlation is the average of the zero-order correlations produced when the sample is divided into homogeneous subsets based on the control variable (X2) and the zero-order correlation is then calculated for each subset. |
|
|
Term
Partial correlation as a correlation of residuals |
|
Definition
The partial correlation between X1 and Y is the zero-order correlation between the residuals from the regression of Y on X2 and the residuals from the regression of X1 on X2 (see the sketch after this card). |
|
|
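A minimal numpy sketch, on made-up data, of that residual definition: regress Y on X2, regress X1 on X2, and correlate the two sets of residuals.

    import numpy as np

    rng = np.random.default_rng(2)
    x2 = rng.normal(size=300)
    x1 = 0.5 * x2 + rng.normal(size=300)
    y = 0.7 * x1 + 0.3 * x2 + rng.normal(size=300)

    def residuals(outcome, predictor):
        design = np.column_stack([np.ones(len(predictor)), predictor])
        coefs = np.linalg.lstsq(design, outcome, rcond=None)[0]
        return outcome - design @ coefs

    # Correlate what is left of Y and X1 after X2 has been removed from each
    partial_r = np.corrcoef(residuals(y, x2), residuals(x1, x2))[0, 1]
    print(partial_r)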
Term
Strengths and weaknesses of statistical control |
|
Definition
Strengths - can control for numerous variables simultaneously; gives a single summary measure indicating the strength of the relationship across all categories of the control variables. Weaknesses - no empirical solution to the choice between the spurious and intervening models; insensitive to conditional relationships; multicollinearity. |
|
|
Term
Statistics vs. parameters |
|
Definition
Descriptive measures in a sample are statistics (x-bar, s, r, etc.), which can be univariate, bivariate, or multivariate. Descriptive measures in the population are parameters (mu, rho). |
|
|
Term
Statistical inference |
|
Definition
Generalizing from the statistics to the parameters: you infer that your sample statistics do or do not apply to the population. You might be wrong, but we can measure the risk you have of being wrong. |
|
|
Term
General Statistical Test Steps |
|
Definition
1. Formulate a hypothesis 2. Carry out the project and calculate the appropriate measure of association 3. Calculate the probability of obtaining a measure of association at least this large between our variables when there is no such relationship in the population 4. Decide whether or not the observed relationship applies to the population |
|
|
Term
Formal Steps in modern statistical test |
|
Definition
1. Formulate the research hypothesis (H1) - always stated in terms of specific parameters. 2. Formulate the null hypothesis (H0) - that the research hypothesis is wrong; it includes ALL logical outcomes besides H1. We directly test the null and then infer that we accept or reject H1. 3. Calculate the probability of getting the observed sample statistic assuming H0 is true. 4. Decide - if the probability is below .05 we reject H0 (accept H1); if it is above .05 we accept H0 (reject H1). See the sketch after this card. |
|
|
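A minimal sketch of the modern test using scipy's Pearson correlation on made-up data: the reported p-value is the probability of a sample statistic at least this large if H0 (no relationship in the population) is true, and the decision is made at the .05 level.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    x = rng.normal(size=50)
    y = 0.4 * x + rng.normal(size=50)

    r, p_value = stats.pearsonr(x, y)     # step 3: probability assuming H0 is true
    if p_value < .05:
        print(f"r = {r:.2f}, p = {p_value:.3f}: reject H0, accept H1")
    else:
        print(f"r = {r:.2f}, p = {p_value:.3f}: accept H0, reject H1")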
Term
Formal steps in classical statistical test |
|
Definition
1. Formulate the research hypothesis. 2. Formulate the null hypothesis. 3. Calculate the sampling distribution of the sample statistic or test statistic (in practice we never actually do this; we just know what it would look like). 4. Select a significance level or rejection region(s). 5. Compute the sample statistic or test statistic, decide on the null, and infer about the research hypothesis. |
|
|
Term
Type I error |
|
Definition
Rejecting a true null hypothesis. The probability of a Type I error is our level of significance: if it is .05, then 5 times out of 100 we are wrong (see the simulation sketch after this card). |
|
|
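A minimal simulation sketch of the Type I error rate on made-up data: when the null hypothesis is true in the population, tests at the .05 level still reject it about 5 times in 100.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    rejections = 0
    trials = 1000
    for _ in range(trials):
        x = rng.normal(size=40)
        y = rng.normal(size=40)           # H0 is true: x and y are unrelated
        _, p_value = stats.pearsonr(x, y)
        if p_value < .05:
            rejections += 1
    print(rejections / trials)            # close to .05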
Term
Type II error |
|
Definition
Accepting a false null hypothesis. Inversely related to the probability of a Type I error. The bigger the sample, the weaker the measure of association (and therefore the weaker the relationship) that can be detected as significant. |
|
|
Term
Inferential statistics |
|
Definition
Some sample statistics (e.g., r) do not have a known sampling distribution, so we use a statistic (called an inferential statistic) that has a known distribution (t or F). The mechanics for converting different statistics to inferential statistics differ, but the logic of the test is the same (see the sketch after this card). |
|
|
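A minimal sketch of one such conversion, turning a sample r into a t statistic with n - 2 degrees of freedom via t = r * sqrt((n - 2) / (1 - r^2)); the values here are hypothetical.

    import math

    r, n = 0.35, 30                              # hypothetical sample correlation and size
    t = r * math.sqrt((n - 2) / (1 - r ** 2))
    print(t)                                     # compare to the t distribution with n - 2 df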
Term
Use of tests of significance on population? |
|
Definition
Critics may say that your measure of association for the population occurred by chance (you can give them the probability that it did), or they may say that your population is really just a sample when you take time into account. |
|
|
Term
statistical significance vs substantive significance |
|
Definition
Statistical significance = the relationship is there (unlikely to be due to chance). Substantive significance = how important is this relationship? |
|
|