Term
|
Definition
factors that cause scores to vary systematically (pushing scores in a particular direction); bias is more serious than random error because it harms objectivity |
|
|
Term
|
Definition
testing conditions, the teacher’s expectations, and the scoring of the test do not consistently favor any group; scores vary only by chance |
|
|
|
Term
|
Definition
Observers recording what they expect participants to do rather than what the participants are actually doing |
|
|
Term
|
Definition
experimenters being biased when they administer the treatment |
|
|
Term
|
Definition
participants changing their behavior to impress you or to help you |
|
|
Term
Social Desirability Bias |
|
Definition
participants acting in ways that make themselves look good |
|
|
Term
|
Definition
a cue discovered by the participant that the participant follows as surely as if the researcher had demanded it |
|
|
Term
|
Definition
stable, consistent scores that are not strongly influenced by random error |
|
|
Term
|
Definition
degree to which answers to each question correlate with the overall test score; degree to which the test agrees with itself |
|
|
Term
Interobserver Reliability |
|
Definition
percentage of times the raters agree; to obtain the interobserver reliability coefficient, researchers calculate a correlation coefficient between the different raters’ judgments of the same behaviors and then square that correlation |
|
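The two-step procedure in this definition (correlate the raters’ judgments, then square the correlation) can be sketched in Python; the ratings below are made-up illustration data, not from the source:

```python
# Hypothetical data: two raters scoring the same 8 behaviors on a 1-5 scale.
rater_a = [3, 5, 4, 2, 5, 1, 4, 3]
rater_b = [2, 5, 4, 3, 5, 1, 5, 3]

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Covariance term (numerator) and the two standard-deviation terms.
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

r = pearson_r(rater_a, rater_b)
# Square the correlation, per the definition above.
interobserver_reliability = r ** 2
print(round(r, 3), round(interobserver_reliability, 3))
```

A high value (near 1.0) means the raters’ judgments rise and fall together, so the observations are reliable.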
|
Term
Relationship of reliability and validity, sample size, power, and effect size |
|
Definition
- Reliability = prerequisite to validity; however, reliability does not guarantee validity (i.e., a reliable measure may not be valid because it is reliably and consistently measuring the wrong thing)
- Easiest way to increase the power of a study is to add participants |
|
|
|
Term
|
Definition
degree to which the measure is measuring the construct that it claims to measure |
|
|
Term
How do you establish construct validity in your study? |
|
Definition
1. Good content validity
2. Good internal consistency
3. Good convergent validity
4. Good discriminant validity |
|
|
Term
Face and Content Validity |
|
Definition
Face: the weakest kind; "does the test LOOK like it measures what it claims?"
Content: identify the construct being measured; a measure with good content validity samples the whole content area |
|
|