Term:
Definition: estimates how much variance is due to error; specifies the probability that our mean differences are statistically significant (i.e., due to the IV)
        
Term: Null hypothesis (H0)
Definition: means are NOT significantly different (therefore X does NOT have an effect on Y)
        
Term: Alternative hypothesis (H1)
Definition: means are significantly different (therefore X has an effect on Y); most desired, since an effect is shown
        
Term: If the means are different enough, we ...
Definition: reject the null hypothesis

Term: If the means are not different enough, we...
Definition: fail to reject the null hypothesis
        
Term: If the person is really innocent but the jury presumes/finds them guilty, what kind of error is this?
Definition: Type I error (rejecting a null that is actually true)

Term: If the person is actually guilty but is presumed innocent, then what type of error has occurred?
Definition: Type II error (failing to reject a null that is actually false)
        
Term: Type I error falsely _____ the null when the null is actually correct.
Definition: rejects
        
Term: Which is worse, a Type I or Type II error?
Definition: Type I, because it allows you to say something works when it really doesn't. Expensive, and possibly harmful to people.
        
Term: Type I errors conclude that means are ________ when they are not.
Definition: significantly different

Term: Type II error
Definition: mistakenly fail to reject the null; state that the IV has no effect on the DV when it actually DOES

Term: If someone is found not guilty but really is guilty, then what type of error is present?
Definition: Type II error
        
Term: Type I is considered the worst type of error because we don't want to put an innocent person in jail, so what do we do?
Definition: control for Type I error tightly (but this may increase the chance of a Type II error)
        
Term: Alpha (α)
Definition: probability of making a Type I error (finding an effect when there is none)
        
Term: If our alpha level is .05, then we have __% confidence that we will NOT have a Type I error.
Definition: 95 (there is only a 5% chance that we are wrong and make a Type I error)
        
Term: Beta (β)
Definition: probability of making a Type II error (failing to find an effect when there is one)
        
Term: Beta is usually set at 0.2, which means that...?
Definition: there's a 20% chance of mistakenly FAILING TO REJECT the null.
        
Term: If you had to choose a type of error to make, we'd choose which type? Type I or Type II?
Definition: Type II (we would rather miss a real effect than claim an effect that isn't there)
        
Term: Type II error rate is typically determined by your set ____________.
Definition: beta (usually set at 0.2)
        
Term: Power
Definition: probability that a study will correctly reject the null when the null is false
        
Term: Power is influenced by what 3 things?
Definition: effect size, alpha level, # of participants
        
        
Term: Power is the "________" of beta.
Definition: opposite (complement): Power = 1 - beta
        
Term: A test saying you're pregnant when you're actually NOT: what type of error is this?
Definition: Type I error (a false positive)

Term: If we say M&M's DON'T have an effect on memory when they actually do, then what type of error is this?
Definition: Type II error

Term: Alpha goes with _______ error and beta goes with ______ error.
Definition: Type I; Type II
        
Term: Power = __?__ if beta is 0.3
Definition: 0.7 (formula: Power = 1 - Beta; 1 - 0.3 = 0.7)
        
Term: Power analysis
Definition: used to estimate the number of participants we need in our study
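To make the card above concrete, here is a minimal power-analysis sketch. It assumes Python with the statsmodels package and a hypothetical medium effect size (Cohen's d = 0.5); neither of those comes from the deck itself.

```python
# Minimal power-analysis sketch (hypothetical numbers; statsmodels assumed).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,  # hypothetical Cohen's d
                                   alpha=0.05,        # Type I error rate
                                   power=0.80)        # 1 - beta
print(f"Estimated participants needed per group: {n_per_group:.0f}")  # ~64
```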
        
Term: Researchers typically aim for .80 ________.
Definition: power
        
Term: What does "underpowered" mean?
Definition: not enough participants (power) in the study to detect the effect of the IV on the DV
        
Term: The scientific community really tries to NOT make Type I errors!!
Definition:
        
Term: Effect size
Definition: how much does the IV affect the DV?
        
Term: Effect size range
Definition: 0.0 - 1.0 (no relationship - perfect relationship)

Term: Is an effect size of .15 a strong representation of the IV on the DV?
Definition: No; .15 is a weak effect (much closer to "no relationship" than to "perfect relationship")
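One common effect-size measure that lives on the 0.0-1.0 scale described above is eta-squared (the proportion of DV variance explained by the IV). A minimal sketch with made-up scores for two hypothetical groups:

```python
# Eta-squared sketch: share of total DV variance explained by group
# membership (0.0 = no relationship, 1.0 = perfect). Scores are made up.
import numpy as np

group_a = np.array([5, 6, 7, 6, 5], dtype=float)
group_b = np.array([8, 9, 7, 9, 8], dtype=float)

all_scores = np.concatenate([group_a, group_b])
grand_mean = all_scores.mean()

ss_total = ((all_scores - grand_mean) ** 2).sum()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in (group_a, group_b))

eta_squared = ss_between / ss_total
print(f"eta-squared = {eta_squared:.2f}")  # ~0.72 here; .15 would be much weaker
```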
        
Term: t-test
Definition: compares two means; decides if there's a statistical difference between Group A and Group B
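A minimal independent-samples t-test sketch (assuming Python with SciPy; the scores for Group A and Group B are hypothetical):

```python
# t-test sketch: is the difference between the Group A and Group B means
# statistically significant? (Scores below are hypothetical.)
from scipy import stats

group_a = [12, 15, 14, 13, 16, 14]
group_b = [18, 17, 19, 16, 18, 20]

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:   # alpha = .05
    print("Reject the null: the means differ significantly.")
else:
    print("Fail to reject the null.")
```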
        
Term: T/F The probability of making a Type I error is called the alpha level.
Definition: True
        
Term: For each test we run, the chance of making an error does what?
Definition: increases (the Type I error risk compounds across multiple tests)
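The "increases" answer can be made concrete with the familywise error rate. A minimal sketch in plain Python, assuming independent tests each run at alpha = .05:

```python
# Familywise Type I error sketch: chance of at least one false positive
# across k independent tests, each run at alpha = .05.
alpha = 0.05
for k in (1, 5, 10, 20):
    familywise = 1 - (1 - alpha) ** k
    print(f"{k:2d} tests -> P(at least one Type I error) = {familywise:.2f}")
# 1 test -> 0.05, 5 -> 0.23, 10 -> 0.40, 20 -> 0.64 (approximately)
```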
        
Term: How can you conduct multiple comparisons without much risk of a Type I error?
Definition: Bonferroni correction, ANOVAs

Term: T/F A Bonferroni correction is a yoga move developed in Italy.
Definition: False; it's making the alpha level more strict.
        
Term: How do you calculate a Bonferroni correction?
Definition: divide alpha by the number of t-tests: 0.05/10 = 0.005
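A minimal sketch of the calculation on the card above, in plain Python (the count of ten t-tests is the card's own example):

```python
# Bonferroni correction sketch: divide the overall alpha by the number of
# planned comparisons to get a stricter per-test alpha.
alpha = 0.05
num_tests = 10
corrected_alpha = alpha / num_tests
print(f"Per-test alpha: {corrected_alpha}")  # 0.005, matching the card
```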
        
Term: ANOVA (< or >) a Bonferroni adjustment
Definition: >; ANOVAs are the best remedy for avoiding a Type I error when multiple tests are needed
        
Term: Analysis of Variance (ANOVA)
Definition: a statistical procedure used to analyze data from designs that involve more than two conditions; it only tells you whether a difference exists somewhere
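A minimal one-way ANOVA sketch (assuming Python with SciPy; the three conditions and their scores are hypothetical):

```python
# One-way ANOVA sketch: is there a difference anywhere among the conditions?
# (ANOVAs are based on the F-test; a significant F does not say WHERE.)
from scipy.stats import f_oneway

condition_1 = [4, 5, 6, 5, 4]
condition_2 = [7, 8, 6, 7, 8]
condition_3 = [9, 10, 9, 11, 10]

f_stat, p_value = f_oneway(condition_1, condition_2, condition_3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```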
        
        
Term: How do you know which groups actually have differences from each other?
Definition: post hoc tests (conducted only if the F-test is significant)
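As a minimal post hoc sketch, here is a Tukey HSD test run after a significant F-test (assuming Python with statsmodels; the scores and group labels are hypothetical):

```python
# Tukey HSD post hoc sketch: which specific groups differ from each other?
# Run only after the overall F-test is significant. Data are hypothetical.
from statsmodels.stats.multicomp import pairwise_tukeyhsd

scores = [4, 5, 6, 5, 4, 7, 8, 6, 7, 8, 9, 10, 9, 11, 10]
groups = ["cond1"] * 5 + ["cond2"] * 5 + ["cond3"] * 5

result = pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05)
print(result)  # pairwise comparison table with a reject True/False column
```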
        
Term: What do these people have in common: Tukey, Scheffé, Newman-Keuls?
Definition: each has their own post hoc (follow-up) test
        
Term: MANOVA (multivariate analysis of variance)
Definition: allows researchers to test a composite of several dependent variables
        
Term: Advantages of MANOVA
Definition: can test related DVs at one time; controls for Type I error
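A minimal MANOVA sketch (assuming Python with pandas and statsmodels; the group labels and the two related DVs are made up):

```python
# MANOVA sketch: test several related DVs in one analysis instead of
# running separate ANOVAs (which would inflate Type I error).
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.DataFrame({
    "group":  ["a"] * 3 + ["b"] * 3 + ["c"] * 3,
    "memory": [5, 6, 5, 7, 8, 7, 9, 9, 10],   # hypothetical DV 1
    "speed":  [3, 4, 3, 5, 5, 6, 7, 8, 7],    # hypothetical DV 2
})

manova = MANOVA.from_formula("memory + speed ~ group", data=df)
print(manova.mv_test())  # Wilks' lambda, Pillai's trace, etc.
```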
        
Term: An experiment becomes quasi if...
Definition: researchers do not have control over randomly assigning Ps, or may be unwilling/unable to manipulate the IV of interest

Term: Why wouldn't a researcher be able to manipulate their IV?
Definition: it may be unethical; the variable may be a subject variable (a pre-existing characteristic)
        
Term: Common threats to internal validity
Definition: maturation, regression to the mean, pre-test sensitization, selection bias, local history, history
        
Term: Maturation
Definition: normal changes that occur over time, which may be mistaken for an effect of the IV
        
Term: Regression to the mean
Definition: Ps selected because of extreme scores; scores may change between pre-test and post-test even though the IV had no effect
        
Term: Pre-test sensitization
Definition: simply taking the pretest changes Ps' reactions to the posttest

Term: Selection bias
Definition: researcher thinks that the IV caused changes in the DV, but the groups were different before the introduction of the IV (therefore this sucks)

Term: Local history
Definition: some other event occurred in one group but not the other, and this event, rather than the independent variable, caused the differences between groups

Term: History
Definition: something other than the IV occurred between pre-test and post-test, and that event, not the IV, actually caused the change (9/11, for example)
        
Term: Quasi-experimental designs do not generally have the same ______ _______ as experiments.
Definition: internal validity; this is the quasi design's weakness

Term: Single-case research vs. group research
Definition: the unit of analysis is the individual vs. the group
        
Term: How can you present results from single-case research?
Definition: graph the individual's data across phases (visual inspection)
        
Term: Criticisms of single-case designs
Definition: limited generalizability; not necessarily valid; easier to generalize with rats than with people; ethical issues involving taking treatment away
        
Term: Case study
Definition: detailed study of a single individual, group, or event
        
Term: Why do case study research?
Definition: source of insight/ideas, describes rare phenomena, psychobiography, illustrative anecdotes

Term: Limitations of case study research
Definition: failure to control extraneous variables, observer biases
        
Term: Simple interrupted time series design
Definition: O1 O2 O3 O4 X O5 O6 O7 O8

Term: Interrupted time series design with multiple replications
Definition: O1 O2 O3 X O4 O5 O6 -X O7 O8 O9
        
Term: (From the study guide) Which of the following is a benefit of using the case study method in behavioral research?
A) To describe rare phenomena
B) Psychobiography
C) Illustrative anecdotes
D) None of the above
E) All of the above
Definition: E, all of the above. See the slides for Chapters 12 and 13 for clarity.
        
        
Term: Non-equivalent control group design
Definition: two groups that are not randomly assigned; one receives the treatment and the other does not (O1 X O2 / O1 O2)
        
Term: (From the study guide) T/F ANOVAs are based on a statistic called the A-test.
Definition: False; ANOVAs are based on the F-test.

Term: (From the study guide) T/F The probability of making a Type I error is called alpha.
Definition: True
        
Term: (From the study guide) What is the difference between an experimental and a quasi-experimental design?
Definition: Experimental: can control extraneous variables and manipulate the IV(s). Quasi-experimental: can't fully control extraneous variables and can't manipulate the IV(s), so it is not a true experiment.
        
Term: Deontology
Definition: as researchers, act dutifully; no harm/deception is ever justified (universal moral code)
        
Term: Utilitarianism
Definition: focuses on the end result (not what is inherently right); the ends justify the means; consequences matter, with a weighing of costs/benefits
        
Term: Ethical skepticism (relativism)
Definition: what is good/right depends on the situation, taking culture and time into account; what's right in the moment?
        
Term: Basic knowledge (benefit of research)
Definition: enhances behavioral knowledge; risks/costs may be high, but it makes a good contribution to research
        
Term: Improvement of research or assessment techniques
Definition: the point is to improve research; provides for more reliable/valid research
        
Term: Practical outcomes (benefit of research)
Definition: practical benefits that improve the welfare of both humans and animals
        
Term: Benefits for researchers
Definition: educational benefits and career advancement
        
Term: Benefits for research participants
Definition: clinical implications, educational, enjoyable
        
Term: All human research has to be approved by panels of the IRB, which stands for what?
Definition: Institutional Review Board
        
Term: Informed consent
Definition: voluntary participation; Ps sign a form that covers the general study purpose/procedures, potential risks/benefits, compensation, etc.
        
Term: Problems with informed consent
Definition: compromises validity; some people can't give informed consent (mentally disabled, children, etc.); waste of time/silly
        
Term: Categories of IRB review
Definition: exempt, expedited, and full-board categories
        
Term: Negatives of coercion to participate
Definition: implied pressure; more at-risk or unstable people may do a study that could be unhealthy for them; Ps can be taken advantage of

Term: Benefits of coercion to participate
Definition: compensation, free drugs/therapy, school credit
        
Term: Minimal risk
Definition: no greater probability/severity of harm from participating in the study than you would face in your daily life
        
Term: You don't need informed consent if...
Definition: minimal risk is involved, the rights of the person aren't being violated, and the research couldn't be done if consent were required
        
Term: Examples of deception in research
Definition: confederates, false feedback, presenting two related studies as unrelated, giving false information
        
Term: Bottom line about deception...
Definition: don't deceive Ps unless you have to, and especially not if it could affect their decision on whether to participate in the study
        
Term: Debriefing
Definition: reveals the true nature of the study, removes stress/negative effects, gets Ps' reactions, maintains good standing with the P

Term: Confidentiality
Definition: people outside the study will not see private information
        
Term: Committee that fights for animal rights in research...
Definition: Institutional Animal Care and Use Committee (IACUC)
        
Term: The three R's of animal research
Definition: reduce the number of animals used, replace animal models with others if possible, refine procedures to ensure the best comfort/care
        
Term: Pros of using animals in research
Definition: answers "unanswerable" questions; inexpensive; good predictor of human behavior; rats are similar to humans in many ways

Term: Cons of using animals in research
Definition: not using the population of greatest interest; can't model certain phenomena; animals can't talk or be exposed to stress, traffic, school, etc.