| Term 
 
        | Scientific method (4 Steps) |  | Definition 
 
        | 1. Objective Measurement: tests in which the evaluator's personal opinion can't bias the results   2. Look for Relationships (these first two steps encompass research methodology)   3. Theory: explanation of the relationships   4. Theory must be testable |  | 
        |  | 
        
        | Term 
 
        | 3 Methods for Searching for relationships |  | Definition 
 
        | 1. Experiments   2. Quasi-Experiments   3. Correlational Studies |  | 
        |  | 
        
        | Term 
 
        | Characteristics of:   Experiments |  | Definition 
 
        | -We can infer causality because experiments are designed to control any outside variables that could affect the outcome -There are GROUPS due to the main variable of interest being Nominal or Categorical -There is manipulation of the variable of interest, implemented by the experimenter |  | 
        |  | 
        
        | Term 
 
        | Characteristics of:    Quasi-Experiments |  | Definition 
 
        | - Groups exist due to the main variable of interest being nominal or categorical -There is no manipulation done by the experimenter therefore the groups are naturally occurring -We cannot infer causality because there is no manipulation |  | 
        |  | 
        
        | Term 
 
        | Characteristics of:    Correlational Studies |  | Definition 
 
        | -Both variables of interest are quantitative therefore there are NO GROUPS -No Manipulation -Cannot infer causation  |  | 
        |  | 
        
        | Term 
 
        | Designs used to study development:   1. Longitudinal   2. Cross-Sectional |  | Definition 
 
        | 1. Same subjects are studied @ several points in their lives   2. participants of different ages are compared   **Age is central variable in all developmental research** |  | 
        |  | 
        
        | Term 
 
        | Types of Measures:   1. Direct Measures   2. Indirect Measures |  | Definition 
 
        | 1. Researchers themselves measure variables of interest (i.e. observations of behavior, physiological/neuro measures, electronic measures)   2. Participants, parents, peers, or teachers report on the variables of interest (i.e. self-reports, interviews, questionnaires, peer ratings) |  | 
        |  | 
        
        | Term 
 
        | Possible problems with indirect measures |  | Definition 
 
        | -Societal ideals/social desirability   -Logical consistency of answers   -Blends of emotions/motives   -Comparison referent assumed   -Lack of experience in judging   -Interpretations of questions may vary |  | 
        |  | 
        
        | Term 
 
        | Sources of behavioral variability: |  | Definition 
 
        | -Individual differences   -Context: the same person may behave differently in different situations   -Time/age |  | 
        |  | 
        
        | Term 
 
        | Central goal in Research is to: |  | Definition 
        | Explain behavioral variability |  | 
        |  | 
        
        | Term 
 
        | Understanding Behavioral variability: |  | Definition 
 
        | 1. Design a study to examine whether particular phenomena or events are related to a particular behavior   2. Measure variability in the behaviors of interest   3. Use descriptive statistics to describe & summarize the variability   4. Use inferential statistics to see if the phenomena/events are related to variability in the behavior |  | 
        |  | 
        
        | Term 
 
        | Techniques To Describe Variability   1. Histogram 2. Range 3. Inter-quartile Range 4. Standard Deviation 5. Variance |  | Definition 
 
        | 1. Graph   2. Highest minus the lowest score; uses info from only 2 scores; may be distorted by extreme scores   3. Score at the 25th percentile subtracted from the score at the 75th percentile; eliminates the problem with extreme scores   4. 'Average' distance of scores from the mean   5. Average squared deviation   |  | 
        |  | 
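The spread measures above (range, IQR, variance, SD) can be sketched with Python's standard library; the function name and the example scores are mine, purely for illustration:

```python
import statistics

def describe_variability(scores):
    """Summarize the spread of a list of scores (illustrative sketch)."""
    q1, q2, q3 = statistics.quantiles(scores, n=4)   # 25th, 50th, 75th percentiles
    return {
        "range": max(scores) - min(scores),          # uses only 2 scores
        "iqr": q3 - q1,                              # robust to extreme scores
        "variance": statistics.pvariance(scores),    # average squared deviation
        "sd": statistics.pstdev(scores),             # 'average' distance from the mean
    }

stats = describe_variability([2, 4, 4, 4, 5, 5, 7, 9])
print(stats)
```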
        
        | Term 

        | Standard Deviation |  | Definition
 
        | -Square root of the average squared deviation -Shows where most scores fall in a distribution -~68% of scores fall within 1 SD of the mean (above and below) -~95% fall within 2 SD of the mean -~99% fall within 3 SD of the mean |  | 
        |  | 
        
        | Term 
 
        | Variance   1. Systematic Variance   2. Error Variance |  | Definition 
 
        | 1. Stems from the variable under study or variables being investigated -The portion of variability that is explained by research   2. Stems from all other variables that affect the variable of interest -Portion of variability that is due to unaccounted variables or outside factors unexplained by research |  | 
        |  | 
        
        | Term 
 
        | How to assess how much variability is explained:   1. In Correlation or Regression   2. In Experiments/quasi-experiments |  | Definition 
 
        | 1. r^2 2. Effect size & meta-analysis |  | 
        |  | 
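For #1, r² can be computed by hand from the Pearson correlation; a minimal sketch (function name is mine, data invented for illustration):

```python
import statistics

def r_squared(x, y):
    """Proportion of variance in y explained by x: Pearson r, squared."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    r = cov / (sx * sy)
    return r ** 2

# Perfect linear relation: essentially all variability explained (r^2 near 1)
rsq = r_squared([1, 2, 3, 4], [2, 4, 6, 8])
```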
        
        | Term 
 
        | 1. Psychological Constructs 2. Operational Definition |  | Definition 
 
        | 1. Psychological events that are hypothesized to exist, but are not directly observable   2. Procedures/definitions that are used to turn constructs that are invisible or unobservable into something that can be observed and measured -Defining a construct by specifying precisely how it is measured or manipulated in a particular study -Allow us to measure a construct that cannot be directly observed (i.e. the psyche) |  | 
        |  | 
        
        | Term 
 
        | Operational Definitions for DIRECT measures |  | Definition 
 
        | -Specific operations or procedures used to define and measure the construct   -Particular behaviors   -Combinations of procedures and behaviors   -Physiological/neurological measures |  | 
        |  | 
        
        | Term 
 
        | Operational Definitions for INDIRECT measures |  | Definition 
 
        | -Instruments are developed so that subjects and/or subjects' parents, teachers, and/or peers measure constructs   -Questionnaires, rating systems, or interviews |  | 
        |  | 
        
        | Term 
 
        | Nominal Measurement Scale Rules   |  | Definition 
 
        | 1. Categorical/Qualitative; different numbers or labels reflecting different things (i.e. sex, jersey numbers) |  | 
        |  | 
        
        | Term 
 
        | Ordinal Measurement Scale Rules |  | Definition 
 
        | 1. Quantitative 2. Different numbers (or labels) reflect different things 3. The Things being measured may be ordered or ranked |  | 
        |  | 
        
        | Term 
 
        | Interval Measurement Scale Rules |  | Definition 
 
        | 1. Quantitative 2. Different #'s or labels reflect different things 3. Things that are measured may be ordered or ranked 4. Distances or intervals between adjacent points of the scale are of equal value |  | 
        |  | 
        
        | Term 
 
        | Ratio Measurement Scale Rules |  | Definition 
 
        | 1. Quantitative 2. Different numbers or labels reflect different things 3. Things that are measured may be ranked or ordered 4. Distances or intervals between adjacent points of the scale are of equal value 5. There is an absolute or fixed zero point |  | 
        |  | 
        
        | Term 
 
        | Reliability 1. Indirect Measures 2. Inter-item Reliability 3. Split-Half Reliability 4. Cronbach's Alpha  5. Direct Measures |  | Definition 
 
        | -Refers to how precise/consistent the measure is   1. Tests & questionnaires: test-retest reliability (for a test to be reliable, scores should remain the same)   2. Consistency among items on a scale designed to measure the same attribute, such as conscientiousness or empathy   3. Divide all items into 2 groups & compute the degree of relationship between them   4. Similar to split-half, but computes the average of all possible split-half combos (a coefficient of .70 is adequate reliability)   5. Operationally defined behaviors; interrater reliability or observer agreement   |  | 
        |  | 
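Cronbach's alpha (#4) follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch (function name and toy data are mine):

```python
import statistics

def cronbach_alpha(item_scores):
    """item_scores: one list per item, same respondents in each list.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(item_scores)
    totals = [sum(vals) for vals in zip(*item_scores)]       # total score per respondent
    item_var = sum(statistics.pvariance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / statistics.pvariance(totals))

# Three perfectly consistent items -> alpha = 1.0
items = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
alpha = cronbach_alpha(items)
```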
        
        | Term 
 
        | Validity 1. Face Validity 2. Content Validity 3. Construct (or convergent) Validity 4. Discriminant (or divergent) Validity 5. Criterion-Related Validity a. Concurrent b. Predictive |  | Definition 
 
        | - Refers to whether or not we are really measuring the underlying construct   1. Does the measure seem to get @ the underlying construct? 2. Does a learning test reflect what's been taught? 3. Is the measure correlated w/other theoretically related constructs? 4. Does the measure discriminate btwn the focal construct and other possible constructs? 5. Is the measure related to a relevant behavioral criterion? a. Is the measure related to a theoretically related current behavior? b. Is the measure related to a future behavior?   |  | 
        |  | 
        
        | Term 
 
        | 1. Sample 2. Population 3. Sampling Techniques |  | Definition 
 
        | 1. Subset of population chosen carefully to represent the population as a whole 2. a clearly defined group of people 3. Increase the likelihood that the sample is representative of population |  | 
        |  | 
        
        | Term 
 
        | Sampling Techniques:   1. Probability Sample A) Simple Random Sample B) Stratified Sample C) Cluster Sample   2. Problems with Probability Samples |  | Definition 
 
        | 1. The likelihood that a particular individual will be selected for the sample can be specified   A) Every individual in the population has an equal chance of being chosen -How is it done? Names out of a hat/# assignment -Benefit: increases likelihood of getting individuals with different qualities/beliefs/views -Problem: Very rare, because you would need the names of everyone in the population for them all to have the same chance of being picked.   B) Used when you want to make sure ppl. with specific characteristics are included in the sample (i.e. election polls) -Divide population into strata or categories based on a particular characteristic that may affect what you're studying (e.g. voting behavior: age, socio-eco status, race, gender, location)   C) Used when pts. can be obtained more easily in groups than as individuals (groups or clusters are randomly selected)   2. Non-Response Bias: probability samples are representative only if everyone who is sampled responds -How to deal with ^: think of ways to encourage participation & compare demographic info btwn respondents and non-respondents |  | 
        |  | 
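Simple random (A) and stratified (B) sampling can be sketched with the standard library; the toy population, strata labels, and function names are all invented for illustration:

```python
import random

# Hypothetical population of 100 people in two strata, "A" and "B"
population = [{"id": i, "stratum": "A" if i < 60 else "B"} for i in range(100)]

def simple_random_sample(pop, n, seed=0):
    """Every member has an equal chance of selection."""
    rng = random.Random(seed)
    return rng.sample(pop, n)

def stratified_sample(pop, per_stratum, seed=0):
    """Sample separately within each stratum so key subgroups are represented."""
    rng = random.Random(seed)
    strata = {}
    for person in pop:
        strata.setdefault(person["stratum"], []).append(person)
    return [p for members in strata.values()
            for p in rng.sample(members, per_stratum)]

srs = simple_random_sample(population, 10)
strat = stratified_sample(population, 5)   # 5 from each stratum
```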
        
        | Term 
 
        | Sampling Technique:   Non-Probability Samples |  | Definition 
 
        | the likelihood that a particular individual will be selected cannot be specified, as there is no way to know this |  | 
        |  | 
        
        | Term 
 | Definition 
 
        | estimation of parameters from standard deviation |  | 
        |  | 
        
        | Term 
 
        | Causation 1. Dictionary Definition 2. Ordinary Definition |  | Definition 
 
        | 1. the act or agency that produces an effect   2. one event (the cause) generates another event (the effect) |  | 
        |  | 
        
        | Term 
 
        |  Mill's Canons explain how causality is inferred   1. Method of Agreement   2. Method of Difference      |  | Definition 
 
        | 1. Identifies causation by observing the common elements in several instances of an event Problem: You can identify the wrong common element   2. Identifies causation by observing the different groups/situations created that are alike in every respect, EXCEPT ONE -How do groups differ? Level of independent variable -How are " alike? Every other aspect that may affect outcome |  | 
        |  | 
        
        | Term 
 
        | 1. How to create groups that are alike in every respect but 1?   2. Independent Variable 3. Dependent Variable |  | Definition 
 
        | 1. Researcher creates groups that are equivalent in all respects except for the INDEPENDENT variable   2. Variable thought to affect some aspect of subjects' behavior, thoughts, or feelings    3. Subjects' responses (behaviors, physiological measures, electronic measures, etc.) that are thought to be induced by the independent variable |  | 
        |  | 
        
        | Term 
 
        | Types of IVs:   1. Environmental 2. Instructional 3. Invasive 4. Subject  |  | Definition 
 
        | 1. Surrounding forces that affect behavior (i.e. temperature)   2. Instructions or information that participants receive which affect behavior    3. Involve creating physical changes in participants' bodies through something entering the body   4. A pre-existing characteristic of the individual participant   **Note: If the ONLY nominal variable is a subject variable then it is always a quasi-experiment** |  | 
        |  | 
        
        | Term 
 
        | 1. Goal of Research Methods (in terms of variability)   2. Correlational studies/Quasi-experiments   3. Experiments |  | Definition 
 
        | 1. Measure differences & test hypotheses about how to explain them   2. Measure variability as it naturally occurs   3. Attempt to cause variability or changes in ppl's behaviors |  | 
        |  | 
        
        | Term 
 
        | 1. Sample -symbols for mean/SD/Variance   2. Population -symbols for mean/SD/Variance |  | Definition 
 
        | 1. Subset of the population chosen carefully to represent the population (mean = X̄ or M / SD = S / Variance = S²)   2. A clearly defined group of people (mean = μ / SD = σ / Variance = σ²) |  | 
        |  | 
        
        
        | Term 
 | Definition 
 
        | We can infer causation because the variable of interest is manipulated by the experimenter ==> providing control for all other variables that may affect the outcome & ruling out reverse causality & third-variable causality |  | 
        |  | 
        
        | Term 

        | Manipulation Check |  | Definition
 
        | Post-experiment; check to make sure the manipulation of variable of interest was successful |  | 
        |  | 
        
        | Term 
 
        | How to choose levels of IV |  | Definition 
 
        | -Control groups -Baseline measures -Treatment vs. control group (compares treatment with baseline measures) |  | 
        |  | 
        
        | Term 
 
        | Assigning participants to conditions (initially equivalent) |  | Definition 
 
        | 1. Simple random assignment (btwn subjects or independent groups)   2. Matched random assignment (matched pairs)   3. Repeated measures (within subjects) |  | 
        |  | 
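Simple random assignment (#1) can be sketched as shuffle-then-deal, which yields equally sized, initially equivalent groups; the function name and participant IDs are mine:

```python
import random

def random_assignment(participants, conditions, seed=0):
    """Shuffle, then deal participants round-robin into conditions,
    so group membership is determined by chance alone."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

groups = random_assignment(list(range(20)), ["treatment", "control"])
```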
        
        | Term 
 
        | Threats to Internal Validity   Common Extraneous Variables 1. Selection 2. Attrition 3. Instrumentation 4. Statistical Regression |  | Definition 
 
        | 1. When different selection procedures are used to place subjects in the various experimental conditions   2. Loss of participants; particularly problematic when this occurs more in one condition than another   3. Changes in the measuring instrument or procedure over time (e.g. as a researcher gets better @ the procedure, later data become more accurate)   4. If you measure the same variable twice, the extreme scores (high & low) will move toward the mean on the second measurement (regression toward the mean) |  | 
        |  | 
        
        | Term 
 
        | Threats to Internal Validity   Repeated Measures Design 1. Order Effects 2. Carryover Effects |  | Definition 
 
        | 1. When subjects complete the conditions in the same fixed order, the order itself affects performance Solution: counterbalance the order of conditions   2. Initial performance affects performance in later conditions; one treatment contaminates another Solution: switch from repeated measures to independent groups or matched pairs |  | 
        |  | 
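The counterbalancing solution for #1 can be sketched as full counterbalancing: list every possible order of the conditions and assign each order to equal numbers of participants (condition labels here are placeholders):

```python
from itertools import permutations

def counterbalanced_orders(conditions):
    """All possible presentation orders of the conditions; assigning each
    order to equal numbers of participants cancels out order effects."""
    return list(permutations(conditions))

orders = counterbalanced_orders(["A", "B", "C"])   # 3! = 6 orders
```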
        
        | Term 
 
        | Threats to Internal Validity   The Experiment as a Social Situation  1. Experimenter Effects (bias) 2. Subject Effects (bias) |  | Definition 
 
        | 1. When researcher gives off subtle cues that may affect participants' performance and/or overall outcome Solution: Make experimenter blind to the condition of pts. -Computerized instructions for pts.   2. Individual biases of pts that may affect performance/overall outcome |  | 
        |  | 
        
        | Term 
 
        | 1. Experimental Design 2. Variance 3. Confound 4. Extraneous Variable |  | Definition 
 
        | 1. For experiments/quasi-exps, you MUST identify the design in order to know which statistical test to use   2. Average squared deviation; squaring exaggerates the differences between ppl.   3. Extraneous variable that varies w/the IV => causes groups to differ in more than 1 respect   4. Variable other than the IV that may affect the outcome/DV |  | 
        |  | 
        
        | Term 
 
        | 1. Systematic Variance   2. Systematic Error |  | Definition 
 
        | 1. Treatment variance + confound variance   2. Confound variance -Control by creating 2 groups alike in every respect but 1 (the levels of IV) thru random assignment, Matched pairs, or repeated measures   - once controlled, statistical tests used to compute probability that results are due to random error   -If probability of random error is low, & confounds are eliminated, the conclusion can be drawn that results are likely due to IV |  | 
        |  | 
        
        | Term 

        | How to identify the experimental design |  | Definition
 
        | 1. # of Independent variables 2. # of levels of each IV 3. How subjects are assigned to each level (random, independent groups, repeated measures) |  | 
        |  | 
        
        | Term 
 
        | What causes random error?   1. Independent Groups 2. Repeated Measures |  | Definition 
 
        | 1. Due to individual differences within groups (from one person to the next)   2. Due to differences within individuals from one time to another |  | 
        |  | 
        
        | Term 
 
        | Simple Experiments/Quasi-experiments (Comparing Sample to Population) -Statistical test: |  | Definition 
 
        | -One IV w/2 levels -One Sample Z & t tests |  | 
        |  | 
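The one-sample t statistic for comparing a sample to a known population mean is t = (M − μ) / (s / √n); a minimal sketch (function name and data are mine):

```python
import math
import statistics

def one_sample_t(sample, pop_mean):
    """t = (M - mu) / (s / sqrt(n)): does the sample mean differ
    from the known population mean?"""
    n = len(sample)
    m = statistics.fmean(sample)
    s = statistics.stdev(sample)        # sample SD (n - 1 denominator)
    return (m - pop_mean) / (s / math.sqrt(n))

t = one_sample_t([5, 6, 7, 8, 9], pop_mean=5)   # df = n - 1 = 4
```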
        
        | Term 
 
        | Simple Experiments/Quasi-Experiments  (sample only/no population info)   1. Independent Groups statistical test:   2. Dependent Samples Statistical test: |  | Definition 
 
        | 1. -Naturally occurring (quasi-exp) -Random assignment (exp) -Independent groups t-test   2. -Matched Pairs -Repeated Measures -Dependent Samples t-test |  | 
        |  | 
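Both t statistics can be sketched from their textbook formulas: the independent-groups t uses a pooled variance, and the dependent-samples t is a one-sample t on the difference scores (function names and data are mine):

```python
import math
import statistics

def independent_t(g1, g2):
    """Independent-groups t with pooled variance."""
    n1, n2 = len(g1), len(g2)
    v1, v2 = statistics.variance(g1), statistics.variance(g2)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    se = math.sqrt(pooled * (1 / n1 + 1 / n2))
    return (statistics.fmean(g1) - statistics.fmean(g2)) / se

def dependent_t(pre, post):
    """Dependent-samples t: a one-sample t on the difference scores vs 0."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return statistics.fmean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

t_ind = independent_t([1, 2, 3], [4, 5, 6])
t_dep = dependent_t([1, 2, 3], [2, 4, 6])
```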
        
        | Term 
 
        | Complex experiments/Quasi-experiments |  | Definition 
 
        | 1. One IV w/3 or more levels   2. More than one IV w/ at least 2 levels for each |  | 
        |  | 
        
        | Term 
 
        | Complex Designs   1. One independent variable 2. Two or more independent variables |  | Definition 
 
        | 1. Single factor or one-way design (either independent groups or repeated measures)   2. Factorial Design -Two-factor (independent, repeated, or mixed) -Three-factor (independent, repeated, or mixed) -Etc. |  | 
        |  | 
        
        | Term 
 
        | Single-factor independent groups ANOVA Source Table |  | Definition 
 | 
        |  | 
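The single-factor independent-groups source table partitions total SS into between-groups (systematic) and within-groups (error) components, with F = MS_between / MS_within. A minimal sketch of that computation (function name and data are mine):

```python
import statistics

def one_way_anova(groups):
    """Single-factor independent-groups ANOVA source-table quantities."""
    all_scores = [x for g in groups for x in g]
    grand = statistics.fmean(all_scores)
    means = [statistics.fmean(g) for g in groups]
    # Between-groups SS: systematic (treatment) variability
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    # Within-groups SS: error variability (individual differences within groups)
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_b = len(groups) - 1
    df_w = len(all_scores) - len(groups)
    ms_b, ms_w = ss_between / df_b, ss_within / df_w
    return {"SS_between": ss_between, "SS_within": ss_within,
            "df_between": df_b, "df_within": df_w, "F": ms_b / ms_w}

res = one_way_anova([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```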
        
        | Term 
 
        | Two-factor Independent Groups ANOVA Source table |  | Definition 
 | 
        |  | 
        
        | Term 
 
        | Single-Factor Repeated Measures ANOVA Source Table |  | Definition 
 | 
        |  | 
        
        | Term 
 
        | Controlling for Error   1. Systematic Error 2. Random Error |  | Definition 
 
        | 1. Control by creating two groups alike in every respect except one: -Create equivalent groups/conditions (techniques for assigning pts. to conditions) that are alike in every respect except one (the level of the independent variable/eliminate confounds)   2. Use statistical tests to find the probability that difference between conditions is due to random error |  | 
        |  |