Term
|
Definition
Changes in one variable are directly responsible for changes in another. True experiments attempt to establish cause-and-effect relationships by: 1. Manipulation 2. Measurement 3. Comparison (statistics) 4. Control |
|
|
Term
Independent Variable (IV) |
|
Definition
The variable being manipulated by the researcher. Treatment conditions: a situation or variable characterized by one specific value of the manipulated variable. Levels: the different values of the IV selected to create and define the treatment conditions |
|
|
Term
Cause and Effect Relationships |
|
Definition
Causation and the third-variable problem: the summertime murder rates/ice cream sales study. Is there a confounding variable? Causation and the directionality problem: which one causes the other? Controlling nature: create an unnatural situation where variables exist in isolation |
|
|
Term
Distinguishing Elements of an Experiment |
|
Definition
General goal: establish cause-and-effect. Demonstrate that changes in one variable cause changes in the other. Rule out the possibility of a third variable causing the changes. Tools utilized to reach this goal: manipulation and control |
|
|
Term
Dealing with Extraneous Variables |
|
Definition
Control prevents an extraneous variable from becoming a confounding variable. Confounds have two important characteristics: they influence the DV and vary systematically with the IV. Types of extraneous variables: Environmental variables- different rooms, weather, temperature, lighting Participant variables- gender, age, IQ, family structure Time-related variables- fatigue, timing of study |
|
|
Term
|
Definition
Posttest-Only Designs Pretest-Posttest Designs Assigning Participants to Conditions Independent Groups Design |
|
|
Term
|
Definition
Two equivalent groups of participants Introduction of the IV Measurement of the effect of the IV on the DV |
|
|
Term
|
Definition
A pretest is given to each group prior to introduction of the experimental manipulation Assures that groups are equivalent at the beginning of the experiment Can quickly measure changes that occur from the pretest to the posttest |
|
|
Term
Advantages/Disadvantages of the Pretest-Posttest Design |
|
Definition
Advantages Can evaluate attrition/mortality (the dropout factor) Assess equivalency of groups with a small sample size Can be used to select participants for the experiment Disadvantages Time consuming and awkward to administer Sensitizes participants to what is being studied Demand characteristics Reduces external validity |
|
|
Term
The Experimental Research Strategy |
|
Definition
Goal: Establish a cause-and-effect relationship Four characteristics: 1. Manipulation 2. Measurement 3. Comparison 4. Control |
|
|
Term
|
Definition
Different groups of scores are all obtained from the same group of participants |
|
|
Term
|
Definition
Different groups of scores are all obtained from separate groups of participants |
|
|
Term
Controlling Extraneous Variables |
|
Definition
Two active control techniques: 1. Holding a variable constant Standardized environment and procedures, specific sample May limit external validity 2. Matching values across treatment conditions Same number (each gender/age) in each condition Same average score in each condition Vary order of treatments Time consuming, tedious One passive control technique: 3. Randomization Simpler technique Using a random process to disrupt the natural/systematic relationship between two variables |
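A minimal Python sketch (not from the notes; participant labels and condition names are invented) of the randomization technique described above: a random process breaks any systematic link between participant characteristics and treatment condition.

```python
import random

def random_assignment(participants, conditions):
    """Shuffle participants, then deal them into conditions round-robin."""
    pool = list(participants)
    random.shuffle(pool)
    groups = {condition: [] for condition in conditions}
    for i, person in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(person)
    return groups

participants = [f"P{i:02d}" for i in range(1, 21)]   # 20 hypothetical participants
groups = random_assignment(participants, ["easy test", "hard test"])
for condition, members in groups.items():
    print(condition, members)
```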
|
|
Term
If you are investigating test performance and self-esteem: |
|
Definition
Manipulate the IV: Administer a general knowledge test- one with obviously difficult questions and one with obviously easy questions Extraneous variables: gender, confusion, depression, hair color What might be considered confounds? Which? Why not? Confusion (varies with condition, might affect the DV) Gender and depression (might affect the DV- no systematic variation with the IV) Hair color (no effect on the DV, no systematic variation with the IV) |
|
|
Term
Comparison: Control Groups |
|
Definition
Experimental group: treatment condition Control group: no-treatment condition 1. No treatment control group Do not receive the treatment being evaluated Creates a standard of normal behavior (baseline) 2. Placebo control group Receive an inert/ innocuous medication instead of the actual treatment |
|
|
Term
|
Definition
An additional measure to assess how participants perceived and interpreted the manipulation and/or to assess the direct effect of the manipulation 1. Explicit measure of the IV (mood questionnaire) 2. Ask questions after participation ( participants)
Particularly useful when there are: Participant manipulations Subtle manipulations Simulations Placebo Controls |
|
|
Term
Between- Subjects Experimental Designs |
|
Definition
A.K.A. independent-measures experimental design Requires a separate, independent group of individuals for each treatment condition Data contain only one score per participant Manipulation of an IV Control of extraneous variables Goal: to determine if differences exist between two or more treatment conditions |
|
|
Term
Between-Subjects Design Adv/Disad |
|
Definition
Advantages Each score is independent of other scores Useful for any research comparing treatment groups Disadvantages Requires many participants Individual differences (extraneous variables) Can become confounding variables Can result in highly variable scores |
|
|
Term
Individual Differences as Confounds |
|
Definition
Groups should be equivalent except for the level of the IV Assignment bias Environmental variables Equivalent groups are Created equally Treated equally Composed of equivalent individuals |
|
|
Term
|
Definition
Random assignment Restricted random assignment Holding variables constant Restricting range of variability Matched groups (assignment) |
|
|
Term
Other Threats to Internal Validity |
|
Definition
In addition to assignment bias (individual differences) and confounding from environmental variables: Differential attrition Mortality differences between groups -> groups are no longer equal Communication between groups Diffusion Compensatory equalization Compensatory rivalry Resentful demoralization |
|
|
Term
Two-Group Mean Difference |
|
Definition
Simplest version of design Single-factor two group design or two group design Requirements IV- Two groups (nominal) DV- One ratio (numerical) Analysis Independent measures t-test Question Is there a significant difference between the means of each group? Advantages Easy interpretation Can maximize differences between treatment conditions Disadvantages Provides relatively little information Limits control group options |
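A hedged sketch of the analysis named in this card, using SciPy and made-up scores for two independent groups; the group labels are hypothetical.

```python
from scipy import stats

group_a = [12, 15, 11, 14, 13, 16, 12, 15]   # e.g., no-treatment control (invented data)
group_b = [17, 19, 16, 18, 20, 17, 19, 18]   # e.g., treatment condition (invented data)

# Independent-measures t-test: is the difference between the two means significant?
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```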
|
|
Term
Within-Subjects Experimental Designs |
|
Definition
A single group of participants undergoes all of the different treatment conditions Each individual is tested/observed in all treatment conditions This is why it's called a repeated-measures design |
|
|
Term
Between Groups (Disadvantages) |
|
Definition
Requires many people Individual differences between groups can become a confounding variable Alternate explanation Individual differences in each treatment can create high variance Obscures difference |
|
|
Term
Disadvantages of Within- Subjects Design |
|
Definition
Threats to internal validity Confounding from environmental variables Confounding from time-related factors History/ Maturation/ Instrumentation/ Testing effects/ Regression Participant Attrition |
|
|
Term
|
Definition
Not directly connected to participating in a previous treatment History/ maturation |
|
|
Term
|
Definition
Directly related to experience obtained by participating in a previous treatment Carryover Effect: Changes in behavior or performance caused by the lingering, after-effects of an earlier treatment condition Progressive Error- Changes in behavior or performance related to general experience in a research study but not related to a specific treatment or treatments Practice effect, Fatigue effect |
|
|
Term
Order Effects as a Confounding Variable |
|
Definition
Hypothetical Example: The treatment produces no significant change (only measurement error)- but the order effect adds 5 points Two important points: 1. Order effect varies systematically with the IV 2. Data incorrectly appear to indicate a significant increase in scores |
|
|
Term
|
Definition
Controlling time: shortening the time between conditions increases the likelihood of order effects Switching to between-subjects: when strong order effects are expected to exist Counterbalancing: changing the order in which treatment conditions are administered from one participant to another so that the treatment conditions are matched with respect to time |
|
|
Term
Counterbalancing and Order Effects |
|
Definition
Evenly distributes order effects Does not eliminate them! |
|
|
Term
Limitations to Counterbalancing |
|
Definition
May distort treatment means Variance: changes within-treatment variance if order effects are present Asymmetrical order effects: order effects may not occur evenly Number of treatments: Complete counterbalancing- requires every possible permutation of treatment presentation Partial counterbalancing- uses enough different orderings to ensure each condition occurs first for some participants, second for others, and so on |
|
|
Term
|
Definition
Simple and unbiased procedure For 4 treatment conditions, use a 4 x 4 matrix Next row: move the last letter of the first line to the beginning Next row: ditto |
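This card appears to describe building a Latin square by rotation; below is a small sketch of that procedure, assuming the reading that the last condition of the previous row moves to the front of the next row.

```python
def rotated_square(conditions):
    rows = [list(conditions)]
    for _ in range(len(conditions) - 1):
        prev = rows[-1]
        rows.append([prev[-1]] + prev[:-1])   # last element moves to the beginning
    return rows

for row in rotated_square(["A", "B", "C", "D"]):
    print(" ".join(row))
# A B C D
# D A B C
# C D A B
# B C D A   -> each condition appears once in every ordinal position
```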
|
|
Term
|
Definition
Within-subjects designs are most preferred when: 1. The population has rare characteristics and participants are more difficult to recruit Fewer subjects are required 2. The population is expected to exhibit a large amount of variability Reduces or eliminates individual differences |
|
|
Term
A note on individual differences |
|
Definition
While within-subjects designs have fewer individual differences in comparison with between-subjects designs: The design does not eliminate them The design allows us to measure and remove them with statistical analysis, but they do exist |
|
|
Term
Two-Treatment Within-Subject Designs |
|
Definition
Advantages Easy to conduct Results are easily interpreted Differences can be maximized Easy to completely counterbalance
Disadvantages Does not provide a complete picture of the relationships between variables (only two points of data) Analysis (interval/ratio DV): Repeated-measures t-test Are treatment means significantly different? |
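A brief sketch of the repeated-measures t-test mentioned above, using SciPy and invented scores for the same participants measured under two treatments.

```python
from scipy import stats

treatment_1 = [24, 30, 27, 25, 29, 31, 26, 28]   # same participants measured twice
treatment_2 = [28, 33, 29, 27, 34, 35, 28, 31]   # (made-up illustration data)

# Repeated-measures (paired) t-test on the difference scores
t_stat, p_value = stats.ttest_rel(treatment_1, treatment_2)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```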
|
|
Term
Multiple Treatment Designs |
|
Definition
Advantages More likely to reveal a functional relationship More convincing cause-and-effect claim
Disadvantages Differences between groups may be too small to find significance Increased likelihood of attrition Difficult to completely counterbalance
Analysis (interval/ratio DV): Repeated-measures ANOVA Are any of the treatment means significantly different from each other? If so, where? |
|
|
Term
Within Subjects vs. Between Subjects Designs |
|
Definition
1. Individual differences Problem: May become confounding variables or increase variance Solution: Within-subjects designs reduce this variance 2. Time-related factors and order effects Problem: May distort results Solution: Between-subjects designs only measure each individual once 3. Number of participants Problem: Some populations are hard to recruit or costly Solution: Within-subjects designs require fewer participants |
|
|
Term
|
Definition
Each individual in one group is matched with a participant in each of the other groups, with respect to a variable considered to be relevant to the study - Does not have identical (within) subjects, but equivalent ones -No order or time effects -Reduces some individual differences |
|
|
Term
When Selecting Research Participants |
|
Definition
Samples may be drawn from the population using Probability Sampling vs. non-probability sampling Sampling must assure external validity to generalize to other populations Determine the needed sample size Power analysis (Ch 13- statistical analyses) Larger samples provide more accurate estimates of population values |
|
|
Term
Straightforward Manipulation |
|
Definition
Most studies utilize straightforward manipulation, if any Levels may be different types of stimuli or environments Easy to interpret Milgram's experiment: presence/absence of an authority figure |
|
|
Term
Staged Manipulation ( A.K.A. Event Manipulation) |
|
Definition
Can get complicated Use of a confederate or accomplice Designed to get participants involved in “real” experience May be difficult to replicate Authority figure present and heart patient “learner” |
|
|
Term
Manipulation Strength ( Example) |
|
Definition
Strong manipulation —> maximizes differences between groups No treatments vs. full treatment
Study on Attitude Similarity and Liking Do birds of a feather flock together? IV: similarity, DV: liking 1,1 = least similarity, least liking 10,10 = most similarity, most liking A strong manipulation would be comparing 1,1 to 10,10 |
|
|
Term
|
Definition
Limited $$ equals Limited use of equipment and supplies Limited salaries for confederates Limited incentives for participants Limited space to conduct experiments …..Which is why most use straightforward manipulation |
|
|
Term
Self Report and Behavioral Measures |
|
Definition
Multiple measures of a variable provide a better picture than a single measure Self Report -Asking participants to report on themselves -Questionnaire on activities of daily living Behavioral Direct observations of behaviors Rate: how many times does it occur? Duration: how long does the behavior last? Reaction time: how quickly a response occurs after the stimulus |
|
|
Term
Difference between Quasi Experiment and Experiment |
|
Definition
You are not manipulating!!! Ex. Gender!!!! You do not have FULL manipulation of independent variable Self Selected |
|
|
Term
|
Definition
Galvanic Skin Response (GSR) Electrical conductance of the skin, which changes when sweating occurs Measures general emotional arousal and anxiety Electromyogram (EMG) Muscle tension Measures tension or stress Electroencephalogram (EEG) Electrical activity of brain cells Measures brain arousal in response to different stimuli Functional MRI (fMRI) Measures activation of brain regions in response to stimuli |
|
|
Term
Sensitivity of the Dependent Variable |
|
Definition
The DV must be sensitive enough to detect differences between groups Ceiling Effect- The independent variable appears to have no effect on the dependent measure because participants all reach the maximum performance level Floor Effect- The opposite of the ceiling effect; the task is too difficult and all participants perform poorly |
|
|
Term
|
Definition
Paper and pencil surveys- inexpensive Cost of reproduction and writing implements Video-taped interviews- expensive Cost of equipment Cost of multiple raters to review and rate observations fMRI- very expensive Training for administrator, interpreter, observer Facilities Equipment |
|
|
Term
Controlling for Participant Expectations |
|
Definition
Demand characteristics: Using unrelated filler items on a questionnaire or otherwise “clouding” the true intention of the study/measurement Placebo effects: Using a placebo group to assure external validity is maintained Participants in the experimental group should show greater performance than those in the placebo group If placebo participants perform the same as experimental participants, that is evidence of a placebo effect |
|
|
Term
Controlling for Experimenter Expectations |
|
Definition
Expectancy effects: Clever Hans Was he really that clever? Well, yes- he used eye cues to start and stop tapping Solutions to the expectancy problem: Single-blind experiment- participant unaware of placebo Double-blind experiment- participant and experimenter both unaware of condition |
|
|
Term
Additional Considerations |
|
Definition
Research Proposals Feedback on the quality of your research Pilot Studies Trial run of your study with small number of participants Manipulation Checks Direct measurement of whether the manipulation had intended effects on participants Mood measure for mood lighting Debriefing Discussion of ethical and educational implications of the study |
|
|
Term
Analyzing and Interpreting Results |
|
Definition
Statistical analysis of the data you collected Examine and interpret the pattern of results Revisit your hypothesis and determine the relationships between the IV and DV |
|
|
Term
|
Definition
Professional Meetings APA Annual Meeting APS Annual Meeting Local conferences Brown bag lunch meetings Journal Articles Peer review Almost 90% rejection rate |
|
|
Term
|
Definition
-Employs questionnaires and interviews to ask people to provide information about themselves Attitudes and beliefs Demographics Past or intended future behaviors -Rests on an assumption that people are willing and able to provide truthful and accurate answers |
|
|
Term
Why Conduct Survey Research? |
|
Definition
Provides methodology for asking people to tell about themselves To study relationships between/among variables To study how attitudes and behaviors change over time Provides useful information for making public policy decisions Important complement to experimental research findings |
|
|
Term
|
Definition
A tendency to respond to all questions from a particular perspective rather than to provide answers that are directly related to the questions -Social desirability, or “faking good” -Most acute when questions concern a sensitive topic Violent or aggressive behavior Illicit behavior Sexual practices However, not everyone misrepresents themselves |
|
|
Term
Defining the Research Objectives |
|
Definition
The survey questions must be tied to the research questions that are being addressed Attitudes and beliefs Questions focus on the ways that people evaluate and think about issues Facts and demographics Factual questions ask people to indicate things they know about themselves and their situation Behaviors Questions can focus on past behaviors or intended future behaviors |
|
|
Term
Question Wording- Ensure Simplicity |
|
Definition
The questions asked should be relatively simple People should be able to easily understand and respond to the questions |
|
|
Term
|
Definition
Jargon and technical terms people won’t understand Double-barreled questions that ask two things at once Loaded questions that include emotionally charged words and may influence responses Negative wording |
|
|
Term
Question Wording- Potential problems that stem from difficulty understanding the question |
|
Definition
Unfamiliar technical terminology Vague or imprecise terms Ungrammatical sentence structure Phrasing that overloads working memory Embedding the question with misleading information |
|
|
Term
Closed vs. Open Ended Questions |
|
Definition
What is your ethnicity? _________ vs. What is your ethnicity? (Check one) __White __Black With closed-ended questions, there are a fixed number of response alternatives |
|
|
Term
|
Definition
Rating scales ask people to provide “how much” judgments on any number of dimensions Graphic Rating Scale ( Mild, Moderate, Severe) Semantic Differential Scale (1-10) Non-verbal Scales for Children (Smiley/ Frowny faces) |
|
|
Term
Finalizing the Questionnaire |
|
Definition
Formatting Should appear attractive and professional Neatly typed and free from errors Use point scales consistently Refining Questions Proof questions with others Pilot test the survey with a small group |
|
|
Term
|
Definition
Personal administration to groups or individuals Presented in written format and respondents write their answers Inexpensive and anonymous Mail Surveys Relatively inexpensive to administer Internet Surveys Easy to design and administer Other Technologies Computerized experience sampling via Palm Pilots first, now iPhones |
|
|
Term
|
Definition
Face to face interviews Time, facilities, transportation costs Telephone interviews Data collected relatively quickly and at less cost Focus group interviews Expensive, but yields good information Problem- Interviewer Bias |
|
|
Term
Survey Designs to Study Changes Over Time |
|
Definition
Questions are the same each time Track changes over time Panel Study |
|
|
Term
Sampling From a Population |
|
Definition
Confidence Intervals Level of confidence that the true population value lies within an interval of the obtained sample Provides info about the likely amount of sampling error, or margin of error Sample Size A larger sample size reduces the size of the confidence interval Must consider the cost/ benefit of increasing sample size |
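A small illustration of the points above, with assumed values (a sample proportion of .60 and a 95% level of confidence): the margin of error shrinks as the sample size grows.

```python
import math

p_hat = 0.60   # hypothetical sample proportion
for n in (100, 400, 1600):
    margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)   # 95% margin of error
    print(f"n = {n:4d}: 95% CI = {p_hat - margin:.3f} to {p_hat + margin:.3f} "
          f"(margin of error ±{margin:.3f})")
```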
|
|
Term
|
Definition
-Simple random sampling Each member of the population has an equal probability of being selected -Stratified random sampling Population divided into subgroups (strata) and random samples taken from each stratum -Cluster sampling Identify clusters and sample from the clusters |
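A minimal sketch, with a hypothetical population, of two of the probability sampling methods listed above.

```python
import random

# Hypothetical population of 1,000 people tagged with a subgroup label
population = [("F" if i % 2 else "M", f"person{i}") for i in range(1000)]

# Simple random sampling: every member has an equal chance of selection
simple_sample = random.sample(population, 50)

# Stratified random sampling: divide into subgroups, then sample within each
males = [p for p in population if p[0] == "M"]
females = [p for p in population if p[0] == "F"]
stratified_sample = random.sample(males, 25) + random.sample(females, 25)

print(len(simple_sample), len(stratified_sample))
```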
|
|
Term
Non-Probability Sampling (unknown probability of any member being chosen) |
|
Definition
-Haphazard sampling Convenience sampling- a “take ’em where you find ’em” approach for obtaining participants -Purposive sampling Sample meets a predetermined criterion -Quota sampling Sample reflects the numerical composition of various subgroups in the population |
|
|
Term
|
Definition
The actual population of individuals from which the sample is drawn Rarely will this perfectly coincide with the population of interest |
|
|
Term
|
Definition
Percentage of respondents who complete the survey If 1,000 questionnaires were mailed out and 500 are completed and returned, the response rate was 50% |
|
|
Term
|
Definition
You are conducting a survey about people's beliefs about the relationship, if any, between family support and success in college Group 1: Write 5 open-ended questions Group 2: Write 5 closed-ended questions
1. Is there a relationship between family support and success in college? 2. What factors are important influences on a person's choice of movie? 3. In a heterosexual couple, does the male or female usually determine the events of an evening out? |
|
|
Term
Increasing the # of Levels of an IV |
|
Definition
Provides more information about the relationship than a two-level design Can reveal a curvilinear relationship Comparing two or more groups How dogs, cats, and birds have beneficial effects on nursing home residents As opposed to just cats and dogs |
|
|
Term
|
Definition
Tests a claim about a population mean with a sample mean (σ unknown) |
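A hedged sketch of this single-sample t test using SciPy; the scores and the claimed population mean are invented.

```python
from scipy import stats

scores = [102, 98, 110, 105, 99, 107, 103, 101]   # hypothetical sample
claimed_population_mean = 100

# Single-sample t-test: does the sample mean differ from the claimed value?
t_stat, p_value = stats.ttest_1samp(scores, claimed_population_mean)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```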
|
|
Term
Independent-measures t statistic |
|
Definition
Tests a claim about the difference between two population means by using two samples and evaluating their mean difference |
|
|
Term
Rogers, Kuiper & Kirker (1977) |
|
Definition
An experiment demonstrating the effect of levels of processing (superficial to deep) on memorization of events All participants were given a surprise memory test They were not told beforehand that they needed to memorize the words on the list |
|
|
Term
|
Definition
Tests claims about mean differences between two or more populations by using two or more samples The tested hypothesis is very similar to the independent measures t statistic but now can be applied when we have more than two groups |
|
|
Term
A Typical Example of a Situation Requiring ANOVA |
|
Definition
t-tests only allow the comparison of 2 means at a time This would require 3 separate t-tests with 3 separate α levels, and the risk accumulates over the series of tests ANOVA allows the evaluation of all three means at once |
|
|
Term
|
Definition
Independent variable: Telephone conditions- three treatment conditions are created by the researcher Quasi-independent variable: Age- a non-manipulated variable used to create groups |
|
|
Term
Statistical Hypotheses for ANOVA- Null |
|
Definition
All populations have the same mean Telephone condition has no effect on driving performance |
|
|
Term
Statistical Hypotheses for ANOVA- Alternative |
|
Definition
H1: There is at least one mean difference among the populations At least one of the population means is different from another (no specific claim about which means differ or how) Not all treatment conditions are the same; there is a treatment effect somewhere |
|
|
Term
Why Variance vs. Mean Differences? |
|
Definition
When we have more than two sample means, how do you evaluate a mean difference? It's problematic However, we can evaluate the variance among these three means |
|
|
Term
Type 1 Errors and Multiple-Hypothesis Tests (Will be asked) |
|
Definition
Why not just use multiple t-tests? Each time we conduct a t-test, we risk a Type 1 error The level of risk for making a Type 1 error is set by our α level For an IV (telephone conditions) with three levels (none, hands-free, hand-held) we would need three separate t-tests Each test compounds the chance of making an experiment-wide Type 1 error With ANOVA we maintain α = .05 |
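A worked check of the point above: the experiment-wide (familywise) Type 1 error risk is 1 - (1 - α)^k for k independent tests.

```python
alpha = 0.05
for n_tests in (1, 3, 6):
    familywise = 1 - (1 - alpha) ** n_tests
    print(f"{n_tests} tests at alpha = .05 -> familywise risk = {familywise:.3f}")
# 1 test  -> 0.050
# 3 tests -> 0.143
# 6 tests -> 0.265
```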
|
|
Term
|
Definition
These scores are all different. Or, in statistical terms, these scores are all variable Our goal is to measure the amount of variability to explain why the scores are different Three Steps 1. Calculate between- treatments variance Here we see that there is a large variance between sample means 2. Calculate within-treatment variance There is some variability in scores within each sample 3. Calculate the F-ratio to compare the two variances |
|
|
Term
|
Definition
Between Treatments Variance “Good” Variance -This is the variance we are interested in Differences between samples -Possible sources of variance Differences caused by treatment Random Chance -How do participants in different conditions differ?
Within-Treatments Variance “Bad” Variance -Variance due to chance and sampling error “noise” -How big are the differences between individuals when there is no treatment effect? -How do the participants within the same condition differ? |
|
|
Term
The F-ratio ANOVAS Test Statistic |
|
Definition
F = variance between treatments / variance within treatments = (treatment effect + differences due to chance) / (differences due to chance) The denominator is our error term (the variance expected due to chance) When the treatment effect is zero (we cannot reject the null), the F-ratio approximates 1 |
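A hedged sketch of the F-ratio logic, using made-up scores for three telephone conditions and SciPy's one-way ANOVA.

```python
from scipy import stats

no_phone   = [22, 20, 24, 23, 21]   # invented driving-performance scores
hands_free = [18, 17, 19, 20, 16]
hand_held  = [12, 14, 13, 11, 15]

f_stat, p_value = stats.f_oneway(no_phone, hands_free, hand_held)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# F much larger than 1 -> the between-treatments variance greatly exceeds the
# within-treatments ("error") variance expected from chance alone.
```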
|
|
Term
Research Designs For ANOVA |
|
Definition
Independent-measures design Uses a separate group of participants for each of the treatment conditions being compared Repeated-measures design Uses one group of participants for all of the treatment conditions Two-factor (or factorial) design 2 or more IVs |
|
|
Term
Research Designs with Two-Factors |
|
Definition
Two IVs, each with its own levels Creates a research design which allows us to look at every possible condition Summarized in a matrix, where we would require a separate sample for each cell Factor A: Self-esteem- low or high Factor B: Audience condition- audience or no audience 2 x 2 ANOVA |
|
|
Term
A Two Factor Research Study |
|
Definition
For example, a researcher studying the effects of heat and humidity on performance could use the following experimental design Factor A: Humidity- low/high Factor B: Temperature- 80, 90, 100 2 x 3 ANOVA |
|
|
Term
What does the 2 factor ANOVA do? |
|
Definition
Evaluates three separate sets of mean differences 1. Mean difference between the two humidity levels 2. Mean differences between the three temperature levels 3. Any other mean differences that may result from unique combinations of a specific humidity level and a specific temperature level |
|
|
Term
Interpretation of Factorial Designs |
|
Definition
Main Effects The effect of each independent variable taken by itself There is a “main effect” for each variable in your design Interactions Effect of one IV depends on the particular level of the other IV There is only one interaction in a two-factor design |
|
|
Term
|
Definition
1. Main effect for Factor A (Gender: Quasi-IV) 2. Main effect for Factor B (Drug, IV) 3. Interaction between Factors A and B (The effect of the drug depends on gender) -Main Effects- The mean differences among levels of one factor -Interactions- When the mean differences between individual treatment conditions are different from what would be predicted from the overall main effects of the factors |
|
|
Term
What Means Are We Evaluating? |
|
Definition
Main effect for Factor A (Gender) Compares the means for males (60) and for females (80) Main effect for Factor B (Drug) Compares the means for Drug A (70) and Drug B (70) Interaction (Gender x Drug) Evaluates how the means for the levels of gender (Factor A) are influenced by drug (Factor B) |
|
|
Term
Why are Interactions Important? |
|
Definition
If the DV is depression scores, consider what conclusions you might find considering main effects alone Females have higher depression scores than men The drug does not have any effect However, if you consider the interaction between these two factors The drug seems to be having a different effect for males than it does for females |
|
|
Term
|
Definition
F = variance (mean differences) not explained by the main effects / variance (differences) expected by chance/error -If the two factors are independent (do not influence each other's effect) there will be no interaction |
|
|
Term
Graphical Illustration of an Interaction |
|
Definition
When there is an interaction, plotting the means of the levels from the factors produces nonparallel lines |
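A small matplotlib sketch using hypothetical cell means (borrowing the humidity/temperature example above) that produces the nonparallel lines characteristic of an interaction.

```python
import matplotlib.pyplot as plt

temperature = [80, 90, 100]               # Factor B levels
low_humidity_means = [85, 80, 78]         # made-up cell means
high_humidity_means = [84, 70, 55]        # performance drops faster when humid

plt.plot(temperature, low_humidity_means, marker="o", label="Low humidity")
plt.plot(temperature, high_humidity_means, marker="o", label="High humidity")
plt.xlabel("Temperature")
plt.ylabel("Mean performance")
plt.legend()
plt.show()   # nonparallel lines suggest an interaction
```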
|
|
Term
Interactions and Moderator Variables |
|
Definition
A moderator variable influences the relationship between the two other variables Influence of carrot consumption on the relationship between age and blood pressure |
|
|
Term
|
Definition
Research hypothesis- Misleading questions result in more errors in eyewitness testimony than do unbiased questions Maybe the type of questioner (the moderator variable) influences this relationship |
|
|
Term
Why We Love the two-factor ANOVA |
|
Definition
Allows us to examine three types of mean differences within one analysis Three hypotheses, each with its separate F- ratio, which we are familiar with |
|
|
Term
Independent Groups Design |
|
Definition
Different group of participants assigned to each of the different conditions For a 2 X 3 design, we need 6 groups |
|
|
Term
|
Definition
The same individuals will participate in all conditions For this 2 x 3 design, we would need only one group |
|
|
Term
|
Definition
Use both repeated measures groups and independent groups For this example we would need 2 groups |
|
|
Term
Increasing the # of our IV’s or Levels of IV’s |
|
Definition
2 X 2= Simplest design 2 IVs with 2 levels in each IV We can increase the levels of an IV 2 X 3= Slightly more complex 2 IVs with 2 levels in one IV, 3 levels in the other IV We can increase the number of IVs 2 X 2 X 2= Even more complex 3 IV’s with 2 levels in each IV 2 X 2 X 3=? |
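A short sketch of how the number of treatment conditions grows with factors and levels; the factor levels below are placeholders.

```python
from itertools import product

def list_conditions(*factors):
    """Print and return every combination of factor levels (one cell per combination)."""
    conditions = list(product(*factors))
    sizes = " x ".join(str(len(f)) for f in factors)
    print(f"{sizes} design -> {len(conditions)} conditions")
    return conditions

list_conditions(["low", "high"], ["low", "high"])                 # 2 x 2 = 4
list_conditions(["low", "high"], [80, 90, 100])                   # 2 x 3 = 6
list_conditions(["low", "high"], ["A", "B"], [80, 90, 100])       # 2 x 2 x 3 = 12
```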
|
|
Term
Measuring Effect Size for ANOVA |
|
Definition
A significant difference indicates our difference is larger than expected by chance, but does not tell us how large this difference is η² = the percentage of variance accounted for η² = SS between / SS total |
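A worked example of the effect-size formula above with assumed sums of squares.

```python
ss_between = 40.0    # hypothetical SS between treatments
ss_within = 160.0    # hypothetical SS within treatments
ss_total = ss_between + ss_within

eta_squared = ss_between / ss_total
print(f"eta squared = {eta_squared:.2f}")   # 0.20 -> 20% of variance accounted for
```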
|
|
Term
What if I have a significant F? |
|
Definition
Reject the null hypothesis. There is a significant difference somewhere between at least two of the groups (not all groups are equal) But I don't know where the difference is If I have 3 groups: Group A could differ from Group B, Group B from Group C, or Group A from Group C So if there is a significant F-ratio, more tests must be done to determine which groups are different |
|
|
Term
|
Definition
Additional hypothesis tests that are done after an ANOVA results in a significant F, to determine exactly which mean differences are significant and which are not -Finds where the significant differences are -Developed to control the experiment-wise error rate |
|
|
Term
|
Definition
Tells us the minimum difference between treatment means that is necessary for significance q: “Studentized range statistic” found in Table B.5 Must know k and df within treatments, and set α n: number of scores in each treatment (requires equal sample sizes in all groups) |
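A worked example of the minimum significant difference described above, HSD = q * sqrt(MS within / n), with assumed values for q, MS within, and n.

```python
import math

q = 3.77           # hypothetical Studentized range value from Table B.5
ms_within = 4.0    # hypothetical within-treatments variance from the ANOVA
n = 10             # scores per treatment (equal n required)

hsd = q * math.sqrt(ms_within / n)
print(f"HSD = {hsd:.2f}")   # any pair of means differing by more than this is significant
```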
|
|
Term
|
Definition
More conservative than Tukey's HSD Uses the ANOVA formula, but only compares two groups at a time |
|
|
Term
Single-Case Experimental Design- (A.K.A. Single-subject Design) |
|
Definition
Use results from a single participant to establish the existence of cause and effect relationships -The impact of an experimental manipulation on one participant 1. Behavior is measured over time during a baseline control period 2. Manipulation is introduced during treatment period 3. Change in behavior between baseline and treatment assessed -Useful for applied research Clinical, counseling, educational |
|
|
Term
|
Definition
Phase= A series of observations made under the same conditions Minimum of 3 observations Baseline phase (A) A series of baseline observations (no treatment is being administered) Treatment phase (B) A series of treatment observations ( a treatment is being administered) |
|
|
Term
Evaluating Graphs (Evaluating phase patterns) |
|
Definition
Level- Magnitude of participants responses Trend- Increase/decrease in magnitude of behavior Stability- The degree to which the level or trend is consistent throughout the phase |
|
|
Term
How to deal with unstable data |
|
Definition
1. Wait- the pattern may stabilize 2. Average sets of observations 3. Look for patterns within the inconsistencies |
|
|
Term
|
Definition
Once a pattern is established Change the conditions (usually by administering or stopping a treatment) Expected to produce a noticeable change in behavior Unique considerations: Does the baseline show a trend? Does the baseline indicate dangerous behavior? Does implementing the treatment severely change behavior? |
|
|
Term
How to evaluate a change in pattern |
|
Definition
Change in average level Immediate change in level Change in trend Latency of change |
|
|
Term
Reversal Designs: The ABA Design |
|
Definition
Baseline (A) -> Treatment phase (B) -> Baseline phase (A) Each B phase pattern is clearly different from each A phase pattern A method to demonstrate the reversibility of the manipulation A.K.A. withdrawal design Example- The use of praise as a treatment to improve a child's school performance Measure test scores (A) Give regimen of praise for correct homework problems (B) Measure test scores (A) |
|
|
Term
Reversal Designs: The ABAB Design |
|
Definition
Baseline (A) -> Treatment (B) -> Baseline (A) -> Treatment (B) Changes from phase A -> B and B -> A are consistent An extension of the ABA design Eliminates the possibility that the change is due to chance Addresses the ethical issue of ending a design with withdrawal of treatment |
|
|
Term
Multiple Baseline Designs |
|
Definition
Change is observed under multiple circumstances The manipulation is introduced at different times to determine that the manipulation caused change -Two simultaneous baseline phases (A) -> treatment phases (B) are implemented at different times for each baseline -Only 1 phase change 1. Multiple baseline across subjects Initial baseline phases correspond to two separate participants 2. Multiple baseline across behaviors Initial baseline phases correspond to separate behaviors for the same participant 3. Multiple baseline across situations Initial baseline phases correspond to two separate situations for the same participant |
|
|
Term
Why use multiple baseline designs? |
|
Definition
When withdrawing the treatment (a B -> A change) would be unethical Shows that behavior change accompanies manipulation of the treatment Replication of the behavior change when implementing the treatment |
|
|
Term
Differences from Group Designs |
|
Definition
Conducted with only one participant More flexible Can be modified or changed No standardization necessary Requires continuous assessment |
|
|
Term
|
Definition
Strengths Cause and effect relationships from one person Not in case studies or quasi experimental designs More generalizable to reality Applied research Extremely flexible Weaknesses Relationship only demonstrated for one person Less generalizable to other individuals Assessment procedures may influence behavior Lack of statistical controls |
|
|
Term
|
Definition
Evaluation of programs implemented to achieve some positive effect on a group of individuals Need Assessment-> Program Theory Assessment-> Process Evaluation -> Outcome Evaluation-> Efficiency Assessment |
|
|
Term
Quasi Experimental Developmental Designs |
|
Definition
Addresses the need to study the effect of an IV in settings in which control cannot be established |
|
|
Term
Non- and Quasi-experimental Designs |
|
Definition
Fail to meet at least one requirement of a true experimental design (ambiguous cause and effect) Non experimental- Little or no attempt to minimize threats to internal validity Quasi experimental- Some attempt to minimize threats to internal validity -Both compare groups (levels of IV) that are not manipulated Between-subjects: defined by a pre-existing participant variable Within subjects: defined in terms of time |
|
|
Term
One-Group Posttest-Only Design |
|
Definition
A.K.A. “One- shot case study” Lacks a crucial element of a true experiment A control group We may have some implicit idea of how a control group would perform, but there is no way of confirming this |
|
|
Term
Nonequivalent Control Group Design |
|
Definition
Does not permit control over assignment to groups Schools with different programs, participants with different jobs, gender, warm vs. cold climates Random assignment or matching is not possible Groups are considered nonequivalent |
|
|
Term
One-Group Pretest-Posttest Design |
|
Definition
Comparison of measures before the manipulation (a pretest) and again afterward (a posttest) An index of change is then computed to determine the effect Threats to internal validity: history, maturation, testing, instrument decay, regression toward the mean |
|
|
Term
Nonequivalent Control Group Pretest-Posttest Design |
|
Definition
Uses preexisting groups: both are measured twice, only one receives the treatment Reduces the threat of assignment bias Reduces the threat of time-related factors Quasi-experimental |
|
|
Term
Interrupted Time Series Design |
|
Definition
Examines the dependent variable over an extended period of time, both before and after the IV is implemented Interpretation problems: Possible regression to the mean |
|
|
Term
|
Definition
Improves the interrupted time series design by finding an appropriate “control group” Involves finding a similar population that did not receive the particular manipulation Limited because this is not a true “control group” |
|
|
Term
|
Definition
Persons of different age groups measured at the same point in time Cohort: A group of individuals of the same age at the same point in history Advantages Relatively quick Relatively inexpensive Gives insight into normative developmental change Disadvantages Cohort Effects: Experiences that happen at a certain point in time can cause an age group at one point in history to differ from the same age group at a different point in history Does not provide information about individual differences |
|
|
Term
|
Definition
Some group (one cohort) is followed over time Advantages Limits unimportant variability Gives insight into the role of the individual in behavior Give insight into the cumulative effect of experiences over time Gives insight into normative developmental change Disadvantages Time consuming Expensive Selective Attrition: You may lose more individuals of a certain type Cannot assess cohort effects |
|
|
Term
|
Definition
Multiple cohorts are studied over time Combines the advantages of both cross sectional and longitudinal designs Gives insight into the role of the individual in behavior Genes express themselves more actively with time Gives insight into the cumulative effect of experience over time Experiences in childhood can influence behaviors later in life more dramatically than they influence immediate behaviors Gives insight into normative developmental change Get initial data quickly and cheaply Can analyze data to look for selective attrition |
|
|