Term
| correlation |
Definition
| measurement of the relationship or association between variables |
|
|
Term
| correlation: levels of measurement |
|
Definition
| 2 continuous or 1 continuous, 1 dichotomous categorical; MUST be equal interval |
|
|
Term
| correlation: research question |
|
Definition
| do individuals who have a low/high score on 1 variable also have a corresponding low/high score on another variable? for each unit change in X, is there a corresponding unit change in Y? how strong is the relationship between these two variables? |
|
|
Term
| correlation: assumptions |
Definition
| normality, linearity (HIGHLY susceptible to outliers), equal interval variables, independence of observations |
|
|
Term
| correlation: types (2) |
Definition
| bivariate (2 variables, measured with Pearson's product-moment correlation (r), or Spearman's rho for ordinal/high-outlier data) or partial (explore relationship b/t 2 variables while statistically controlling for a third) |
|
|
Term
| benefits of partial correlation |
|
Definition
| provides a clearer, more accurate depiction of the actual relationship between 2 variables (see the partial-correlation sketch below) |
|
|
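A minimal Python sketch of the bivariate vs. partial distinction, using the residual approach (correlate X and Y after regressing each on the control variable Z). The variable names and simulated data are illustrative assumptions, not from the cards:

# Sketch: partial correlation between x and y, controlling for z,
# computed by correlating the residuals after regressing each on z.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
z = rng.normal(size=100)
x = 0.5 * z + rng.normal(size=100)   # x partly driven by z
y = 0.5 * z + rng.normal(size=100)   # y partly driven by z

def residuals(outcome, control):
    """Residuals of outcome after removing the linear effect of control."""
    slope, intercept, *_ = stats.linregress(control, outcome)
    return outcome - (intercept + slope * control)

r_zero_order, _ = stats.pearsonr(x, y)                           # bivariate r
r_partial, _ = stats.pearsonr(residuals(x, z), residuals(y, z))  # partial r
print(f"zero-order r = {r_zero_order:.2f}, partial r = {r_partial:.2f}")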
Term
| example of correlation write up |
|
Definition
| Results show a strong, positive correlation between posting stats shit on Facebook and being a total dork (r = .88, p < .01). This relationship remained significant, even after controlling for number of hours spent drooling over David Klonsky's LCA paper (r = .66, p < .01). |
|
|
Term
| correlation: measurement (2) |
|
Definition
| strength/magnitude (how spread are the data around the line of best fit? tight or loose?); direction (+ or -): low~low, high~high vs. low~high, high~low |
|
|
Term
| correlation: calculation |
Definition
| calculated using z-scores (need z's because of different scales, with different means and SDs); cross product = z-score of X × z-score of Y; average of the cross products = r |
|
|
Term
| Pearson's correlation (r) |
|
Definition
| correlation coefficient; r = sum of the cross products of the z-scores / n; ranges from -1 to +1 |
|
|
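A short Python sketch of this definition: standardize each variable, multiply the z-scores pairwise, and average the cross products; scipy's pearsonr is shown only as a check. The data are made up for illustration:

# Sketch: Pearson's r as the average cross product of z-scores.
import numpy as np
from scipy import stats

x = np.array([2.0, 4.0, 5.0, 7.0, 9.0])
y = np.array([1.0, 3.0, 6.0, 6.0, 10.0])

zx = (x - x.mean()) / x.std()     # z-scores put both variables on the same scale
zy = (y - y.mean()) / y.std()
r_manual = (zx * zy).mean()       # average of the cross products

r_scipy, p = stats.pearsonr(x, y) # library check
print(r_manual, r_scipy, p)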
Term
| Spearman's rho |
Definition
| correlation coefficient when using ordinal data or highly skewed (lots of outliers) data |
|
|
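A quick sketch of why rho is preferred with outlier-heavy data: the invented data below have one extreme point, and the rank-based coefficient is less thrown off by it than Pearson's r.

# Sketch: Spearman's rho vs. Pearson's r on data with an outlier (invented data).
import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5, 6, 7, 8.0])
y = np.array([2, 1, 4, 3, 6, 5, 8, 40.0])   # last point is an outlier

print("Pearson r:    %.2f" % stats.pearsonr(x, y)[0])
print("Spearman rho: %.2f" % stats.spearmanr(x, y)[0])  # rank-based, less outlier-sensitive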
Term
| small, medium, large correlations |
|
Definition
| .10-.29, .30-.49, .50 and up |
|
|
Term
| 2 methods to test for outliers |
|
Definition
| histogram (univariate), scatterplot (multivariate) |
|
|
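A small matplotlib sketch of the two visual checks named on the card; the DataFrame and the column names ("anxiety", "nssi_freq") are hypothetical, simulated data.

# Sketch: the two quick visual outlier checks (histogram + scatterplot).
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({"anxiety": rng.normal(50, 10, 200),
                   "nssi_freq": rng.normal(5, 2, 200)})

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.hist(df["anxiety"], bins=20)                   # univariate check
ax1.set_title("histogram (univariate)")
ax2.scatter(df["anxiety"], df["nssi_freq"], s=10)  # joint check for odd cases
ax2.set_title("scatterplot")
plt.tight_layout()
plt.show()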
Term
| reliability |
Definition
| consistency, stability of a measure. necessary but not sufficient for validity of the measure |
|
|
Term
| validity |
Definition
| degree to which an instrument measures the construct intended, accuracy |
|
|
Term
| small, medium, large measure of reliability |
|
Definition
| .45-.65 shitty; .70-.80 acceptable; >.80 optimal; .99 = measuring the same thing? context needed |
|
|
Term
| Likert scale |
Definition
| Likert: how often behavior or event has occurred, or opinion as to how strongly a person feels about something. Assume linear relationship. neutral point? can use as interval data to run parametric tests; need normality of distribution for certain tests. |
|
|
Term
| Thurstone scale |
Definition
| empirical data from judges to ensure attitudes/behaviors being measured are spaced along the continuum at equal intervals |
|
|
Term
| Guttman scale |
Definition
| hierarchical scaling technique that ranks items such that individuals who agree with higher ranked item will also agree with items of lower rank |
|
|
Term
| pilot testing |
Definition
| should be done w/ a smaller sample, but large enough to allow for a Cronbach's alpha greater than .70; determine which items should be kept/deleted |
|
|
Term
| factor analysis: steps |
Definition
| extraction, principal component analysis, varimax/oblimin rotation |
|
|
Term
| extraction |
Definition
| principal component analysis: explores relationships between variables, provides a basis for removal of unnecessary items and ID of subscales/domains |
|
|
Term
| rotation |
Definition
| factor rotation that maximizes loadings of variables on different subscales. varimax when factors are assumed uncorrelated (orthogonal); oblimin when factors are correlated (oblique). used to ID the number of domains in a measure |
|
|
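A sketch of extraction plus rotation, assuming the third-party factor_analyzer package is installed; the item file and column names are hypothetical.

# Sketch: extraction (eigenvalue > 1 rule) and varimax/oblimin rotation.
import pandas as pd
from factor_analyzer import FactorAnalyzer

# items: respondents x questionnaire items, already cleaned (hypothetical file)
items = pd.read_csv("scale_items.csv")

fa = FactorAnalyzer(n_factors=2, rotation="varimax")  # use rotation="oblimin" if factors correlate
fa.fit(items)

eigenvalues, _ = fa.get_eigenvalues()                 # eigenvalue > 1 rule for extraction
print("eigenvalues:", eigenvalues.round(2))
print(pd.DataFrame(fa.loadings_, index=items.columns))  # loadings; items loading < .4 are candidates to drop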
Term
| construct validity |
Definition
| degree to which an instrument is related to operationally defined theory and concepts |
|
|
Term
| construct validity:measurements |
|
Definition
| contrasted groups: 2 groups known to be high and low, means should differ; hypothesis testing: theoretical; factor analysis: related items kept, exploratory and confirmatory |
|
|
Term
| convergent/discriminant validity: measurements |
|
Definition
| multitrait-multimethod (MTMM) approach: 2 or more constructs measured with 2 or more methods; correlation matrix for relationships between traits |
|
|
Term
|
Definition
| face and content validity- |
|
|
Term
| content validity |
Definition
| items measure the complete range of the attribute under study (i.e. not 4 types of NSSI, but 14); determined by lit review, experts, population sampling; largest pool then reduced with factor analysis |
|
|
Term
| face validity |
Definition
| instrument looks like it measures the construct of interest; subjective |
|
|
Term
| criterion validity (4 types) |
|
Definition
| concurrent, predictive, convergent, discriminant; relationship between measurement and construct based on performance on another variable |
|
|
Term
| concurrent validity |
Definition
| scores on a measure correlated to a related criterion at the same point in time |
|
|
Term
| concurrent validity: measurements |
|
Definition
| exploratory factor analysis, eigenvalue > 1; confirmatory factor analysis, loading on each subscale > .4; principal components analysis (5 participants per variable); ex. ISAS and SITBI |
|
|
Term
| predictive validity |
Definition
| degree to which scores predict performance on some future criterion |
|
|
Term
| predictive validity: measurements |
|
Definition
| correlations, regressions; ex. ISAS behavioral forecast predicts NSSI |
|
|
Term
| convergent validity |
Definition
| correspondence between constructs that are theoretically similar. |
|
|
Term
| convergent validity: measurements |
|
Definition
| correlations; ex. ISAS and MSI-BPD |
|
|
Term
| discriminant validity |
Definition
| measurement differentiates between constructs that are theoretically different |
|
|
Term
| discriminant validity: measurements |
|
Definition
| MTMM, weak/no correlation; ex. ISAS and SITBI gesture, suicide attempt |
|
|
Term
| content validity: measurements |
|
Definition
| content validity ratio (CVR) or content validity index (CVI); depend on number of experts (7+) (see the CVR sketch below) |
|
|
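The cards don't spell out the formula, so here is a sketch of the standard Lawshe content validity ratio for a single item; the vote counts are illustrative.

# Sketch: Lawshe's content validity ratio (CVR) for one item.
def content_validity_ratio(essential_votes: int, n_experts: int) -> float:
    """CVR = (n_e - N/2) / (N/2); ranges from -1 to +1."""
    return (essential_votes - n_experts / 2) / (n_experts / 2)

# e.g. 6 of 7 experts rate the item "essential"
print(round(content_validity_ratio(essential_votes=6, n_experts=7), 2))  # ~0.71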
Term
| test-retest reliability |
Definition
| similar scores at different times (2 weeks-1 month apart), not due to chance; some states do change, so not always useful |
|
|
Term
| internal consistency |
Definition
| Cronbach's alpha: inter-item correlations to determine if items measure the same construct, i.e. how well they "hang" together. can add items to increase it; if over .9, might be measuring the same thing (see the alpha sketch below) |
|
|
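A minimal sketch of Cronbach's alpha computed directly from an items matrix (respondents x items); the simulated data are illustrative only.

# Sketch: Cronbach's alpha from raw item scores.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(size=(200, 1))
items = latent + rng.normal(scale=0.8, size=(200, 5))  # 5 items tapping one construct
print(round(cronbach_alpha(items), 2))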
Term
| parallel/alternate forms reliability |
|
Definition
| different item pools to test the same concepts, i.e. day and night versions of a measure |
|
|
Term
| Magnusson quote as to why we need good measures |
|
Definition
| "models are never more accurate, reliable, or valid than the measures you put into them" |
|
|
Term
| regression |
Definition
| use correlations in a systematic/meaningful way to test hypotheses. predict/calculate any value of Y (DV) from a value of X (IV). based on probabilities |
|
|
Term
| R squared |
Definition
| R squared, multiple correlation. how much of the DV's variance is accounted for by the IV. 0-1, always positive. closer to 1 = greater amount of variance explained. tests magnitude of prediction and mediation. |
|
|
Term
| regression equation |
Definition
| Y = B0 + B1(X) + err; Y = DV; B0 = intercept, constant; B1 = slope/beta weight/regression weight; X = value of IV; err = random, unsystematic error. USE UNSTANDARDIZED SCORES (see the OLS sketch below) |
|
|
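A sketch of the equation in code, using statsmodels OLS on unstandardized scores as the card says; the IV/DV values are simulated for illustration.

# Sketch: simple regression Y = B0 + B1*X + err with statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.normal(50, 10, 150)                 # IV
y = 2.0 + 0.4 * x + rng.normal(0, 5, 150)   # DV = intercept + slope*X + error

X = sm.add_constant(x)                      # adds the B0 (intercept) column
model = sm.OLS(y, X).fit()

print(model.params)      # B0 (const) and B1 (slope), unstandardized
print(model.rsquared)    # R squared: share of DV variance explained
print(model.pvalues)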
Term
| assumptions of regressions |
|
Definition
| linearity/normality, independence and homogeneity of error, homoscedasticity (variability at one point is similar to variability at a different point) |
|
|
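A quick visual check of those error assumptions; this assumes `model` is a fitted statsmodels OLS result like the one in the sketch above.

# Sketch: eyeballing homoscedasticity and normality of errors with residual plots.
import matplotlib.pyplot as plt

fitted = model.fittedvalues
resid = model.resid

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.scatter(fitted, resid, s=10)   # spread should look similar across fitted values
ax1.axhline(0, color="gray")
ax1.set_title("residuals vs. fitted (homoscedasticity)")
ax2.hist(resid, bins=20)           # roughly bell-shaped if errors are normal
ax2.set_title("residual histogram (normality)")
plt.tight_layout()
plt.show()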
Term
|
Definition
| predict DV from IV, OR predict 1 DV from >1 IV (mediation) |
|
|
Term
|
Definition
| how far case is from other cases |
|
|
Term
|
Definition
| how in line with linear trend |
|
|
Term
| regression: standardized coefficients |
|
Definition
| interpreted as correlation strength, direction, significance. |
|
|
Term
| multicollinearity |
Definition
| measures are overly related, may be measuring the same thing; everything is significant when testing reliability (see the VIF sketch below) |
|
|
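A sketch of checking whether predictors are overly related using variance inflation factors; the predictor names and simulated data are hypothetical.

# Sketch: VIF as a multicollinearity check (VIF > 10 is a common red flag).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(4)
anxiety = rng.normal(size=200)
worry = anxiety * 0.9 + rng.normal(scale=0.3, size=200)  # nearly the same construct
stress = rng.normal(size=200)

X = sm.add_constant(pd.DataFrame({"anxiety": anxiety, "worry": worry, "stress": stress}))
for i in range(1, X.shape[1]):                            # skip the constant
    print(X.columns[i], round(variance_inflation_factor(X.values, i), 1))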
Term
| variable centered approach |
|
Definition
| given prior behavior, scores, genetic markers, contextual risk and protective factors, individuals are interchangeable units who, apart from random error, do not differ qualitatively or quantitatively from each other. |
|
|
Term
| person centered approach |
Definition
| how individuals change or behave. how individuals function, holistic. do not assume normality; less sensitive than parametric tests and may fail to detect differences that actually exist (type II error) |
|
|
Term
|
Definition
| developmental processes are active, integrated, complex, dynamic, adaptive in relation to greater system |
|
|
Term
| ecological fallacy |
Definition
| drawing false conclusions about individual behavior from population behavior |
|
|
Term
| atomistic fallacy |
Definition
| drawing false conclusions about aggregate behavior from individuals |
|
|
Term
| types of person centered analyses |
|
Definition
| classification (cluster and LCA); hybrid classification (growth mixture model); single-subject methods (dynamic factor analysis); variable-oriented methods (latent growth curve modeling) |
|
|
Term
| cluster analysis |
Definition
| examine how groups of individuals are similar to one another, and different from individuals in other groups/clusters. exploratory in nature |
|
|
Term
| hierarchical cluster analysis |
|
Definition
| small (<50) data sets, standardized continuous data |
|
|
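A sketch of hierarchical (agglomerative) clustering on a small, standardized data set; the simulated data and the choice of Ward linkage are illustrative assumptions.

# Sketch: hierarchical cluster analysis with scipy.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(5)
data = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(3, 1, (20, 3))])  # two loose groups
data = zscore(data)                                  # standardize continuous variables first

links = linkage(data, method="ward")                 # build the cluster tree
labels = fcluster(links, t=2, criterion="maxclust")  # cut the tree into 2 clusters
print(labels)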
Term
| k-means cluster analysis |
Definition
| moderate sample size, standardized continuous data; assigns cases to clusters, then rechecks means to maximize differences b/t groups. may or may not ask SPSS to create a specific number of groups |
|
|
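A sketch of the same idea outside SPSS: k-means on standardized continuous data with a requested number of groups. Data are simulated for illustration.

# Sketch: k-means clustering on standardized data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
data = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])

X = StandardScaler().fit_transform(data)               # put variables on the same scale
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print(km.labels_[:10])        # cluster assignment per case
print(km.cluster_centers_)    # cluster means, re-checked/updated iteratively to separate groups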
Term
| two-step cluster analysis |
Definition
| large data sets; computer selects number of clusters based on most relevant variables to maximize differences b/t clusters; can use continuous and discrete/nominal data. step 1 = preclustering (put new cases in with existing preclusters or make a new one). step 2 = use hierarchical techniques to form the clusters, then assign cases to clusters |
|
|
Term
| BIC (Schwarz's Bayesian information criterion) |
Definition
| Schwarz's Bayesian information criterion (BIC); a change of more than one = best number of clusters; tells exactly how much more likely it is that an individual comes from group A vs. group B (see the BIC sketch below) |
|
|
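A sketch of using BIC to compare cluster solutions. This is not the SPSS TwoStep procedure itself; it uses Gaussian mixture models on simulated data as a general illustration of "fit each k, keep the one with the lowest BIC."

# Sketch: comparing numbers of clusters with BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2)),
               rng.normal(8, 1, (100, 2))])

for k in range(1, 6):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    print(k, round(gm.bic(X), 1))   # lower BIC = preferred number of clusters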
Term
|
Definition
log likelihood estimates (MLE or LSE) with mix of variable types Euclidian distance same standardized variables |
|
|
Term
| latent class analysis (LCA) |
Definition
| ID unique classes of cases like cluster analysis with categorical data. estimation uses MLE. assumes variables are uncorrelated within classes, although often not the case |
|
|
Term
|
Definition
| determine latent variables (ones that cannot be measured directly; drawn as circles) based on observed/indicator variables (drawn as boxes); need 4 |
|
|
Term
| example of regression write up |
|
Definition
| Time spent ogling David Klonsky's LCA is predictive of desire to TA stats for smelly undergrads, such that more time spent ogling was significantly positively associated with desire to TA smelly undergrads (r = .66, p < .01). Time spent ogling explained 44% of the variance in desire to TA smelly undergrads. |
|
|