Term
Content Validity |
Definition
"Degree to which the elements of an assessment instrument are relevant of targeted construct for the particular assessment purpose." -Haynes |
|
|
Term
Content Validity Problems |
|
Definition
- Omissions of relevant content
- Inclusion of irrelevant content
- Over- (or under-) emphasis of some aspects of the domain
|
|
|
Term
General Steps when building Content Validity into NEW instruments |
|
Definition
1. Identify construct (or behavior)
2. Identify purpose (function of your measurement)
3. Identify population (who will use the measurement) |
|
|
Term
Validity: Classical Distinctions (3) |
|
Definition
- Content Validity
- Criterion-Related Validity
- Construct Validity |
|
Term
Face Validity |
Definition
Does the measure appear to measure what it says it measures?
(No established formal procedures; superficial examination, usually by potential consumers) |
|
|
Term
Content Validity |
Definition
- Including appropriate content
- Avoiding inappropriate content
(Getting a good balance) |
|
|
Term
Forms of Construct Validity?
4 (+ other)
|
|
Definition
- Convergent Validity
- Discriminant Validity
- DiscriminaTIVE Validity
- Factorial Validity
- Other evidence for Construct Validity
(Getting at the MEANING of the scores)
|
|
|
Term
Convergent Validity |
Definition
Compares:
Scores on target measure w/ scores on a measure of the SAME construct.
(Ex: comparing scores on my depression measure with scores on another depression measure)
(a form of Construct Validity) |
|
|
Term
Nomological Network |
Definition
Web of relationships between Constructs
(a representation of the constructs, their observable manifestations, and the interrelationships among and between them. Cronbach and Meehl's view of Construct Validity: to provide evidence that a measure has construct validity, a nomological network has to be developed for it.) |
|
|
Term
Discriminant Validity |
Definition
Compares:
Scores on target measure w/ scores on a measure of supposedly UNrelated construct(s).
(the purpose is to discriminate the target construct from other constructs)
(a form of Construct Validity)
|
|
|
Term
Discriminative Validity |
Definition
Compares:
Two or more groups w/ scores from the target measure.
(ANOVA; significance between groups)
(a form of Construct Validity) |
|
|
Term
Factorial Validity |
Definition
Items assessed for how they group together
(How they cluster in meaningful ways)
|
|
|
Term
Negative Correlations (evidence for Construct Validity) |
Definition
Inverse-Related (not UNrelated)
(strong magnitude, but in the opposite direction)
|
|
|
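One way to keep these forms straight is the correlation pattern each one predicts for the target measure X (a sketch in standard notation; the comparison measures Y are hypothetical, not from the cards):

```latex
% Expected correlation patterns for construct validity evidence
\begin{align*}
\text{Convergent:}      \quad & r(X,\, Y_{\text{same construct}}) \text{ strongly positive} \\
\text{Discriminant:}    \quad & r(X,\, Y_{\text{unrelated construct}}) \approx 0 \\
\text{Inverse-related:} \quad & r(X,\, Y_{\text{opposite construct}}) \text{ strongly negative}
\end{align*}
```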
Term
What is Criterion-Related Validity? |
|
Definition
It addresses the relationship between our scores and some criterion we wish to approximate or predict.
(Scores can be used to predict a person's performance, behaviors, etc.)
(Criterion = Gold Standard) |
|
|
Term
2 aspects of Criterion Validity? |
|
Definition
- Concurrent Validity
- Predictive Validity
|
|
|
Term
Concurrent Validity |
Definition
Do scores on my measure predict scores on the criterion measure given at approximately the same point in time? |
|
|
Term
Predictive Validity |
Definition
Do scores on my measure predict performance at a LATER time? |
|
|
Term
Positive Predictive Power |
|
Definition
a true positive (probability that a person who has a positive test result actually has the outcome) |
|
|
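In 2x2 decision-table terms (standard notation, assuming TP = true positives and FP = false positives):

```latex
% Positive predictive power from the 2x2 decision table
\[
\text{PPP} = \frac{TP}{TP + FP}
\]
```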
Term
Generalizability Theory (g-theory) |
Definition
an extension of classical test theory that uses ANOVA methods to evaluate the combined effects of multiple sources of error variance on test scores simultaneously |
|
|
Term
What does g-theory explain? |
Definition
Explains systematic influences on scores (differences among participants in terms of the construct)
- Considers multiple contributors
- Contributors = facets (e.g., raters (r), occasion/time of measurement (pre/post), group conditions, test items) |
|
|
Term
g-theory looks at multiple _____ simultaneously. |
|
Definition
Facets
- Participants (p)
- Raters (r)
- Time (t) |
|
|
Term
Purpose of g-theory? |
Definition
- To design a study that will have adequate generalizability
- Concerned with the extent to which a SAMPLE of measures generalizes to a more extensive set of conditions |
|
|
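A minimal sketch of the simplest crossed design (persons p x raters r; standard G-theory notation, not taken from the cards): observed-score variance splits into facet components, and the generalizability coefficient is the share attributable to persons.

```latex
% One-facet (persons x raters) G-study: variance decomposition
\[
\sigma^2(X_{pr}) = \sigma^2_p + \sigma^2_r + \sigma^2_{pr,e}
\]
% Generalizability coefficient for a mean over n_r raters
\[
E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{pr,e}/n_r}
\]
```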
Term
Why hasn't g-theory caught on? |
|
Definition
- Because it's VERY INTENSE! (must cross all levels; the stats get complicated)
- No rule of thumb for interpretation of results
- Results depend upon variability among people in the data set
- Not enough measurement classes |
|
|
Term
Relationship between reliability and validity |
|
Definition
Classical measurement theory: the reliability of scores sets an upper limit on validity coefficients (and other r's) involving those scores = ATTENUATION |
|
|
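In formula form (classical test theory; r_xx and r_yy are the reliability coefficients of the two measures):

```latex
% Reliability caps the observable correlation between measures X and Y
\[
|r_{xy}| \le \sqrt{r_{xx}\, r_{yy}}
\]
```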
Term
Correction for Attenuation |
|
Definition
Estimates what r would be if the constructs were measured WITHOUT ANY ERROR.
Note: use reliability coefficients from your OWN data |
|
|
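The standard correction formula (estimates the correlation between error-free true scores):

```latex
% Correction for attenuation
\[
\hat{r} = \frac{r_{xy}}{\sqrt{r_{xx}\, r_{yy}}}
\]
```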
Term
last but not least, "Content validity is ________." |
|
Definition
Conditional
- It depends on a particular instrument, used for a particular purpose, with a particular population |
|
|
Term
Construct Validity; all aspects address the ________ of the scores. |
|
Definition
Meaning!
- Do our scores show expected relationships (with construct-related behaviors) based on theory or past research? |
|
|
Term
4 aspects of Construct Validity |
|
Definition
- Convergent Validity
- Discriminant Validity
- Discriminative Validity
- Other evidence for Construct Validity |
|
|
Term
Convergent Validity |
Definition
Do scores on my measure relate positively to scores on another measure of the same construct? |
|
|
Term
Discriminant Validity (Unrelated) |
|
Definition
Are scores on my measure UNrelated to scores on a measure assessing a theoretically UNrelated construct? |
|
|
Term
Discriminative Validity |
Definition
Do scores on my measure differentiate between groups KNOWN or SUSPECTED to differ on the construct I am assessing? |
|
|
Term
4th Aspect of Construct Validity |
|
Definition
Other evidence for Construct Validity:
- Factor Analysis (Factorial Validity)
- Do scores change as predicted when exposed to factors that should produce change (experimental manipulations, developmental changes)?
- Negative correlations between target scores and scores assessing constructs expected to be INVERSELY RELATED to the target construct |
|
|
Term
2 Aspects of Criterion-Related Validity |
|
Definition
- Concurrent Validity
- Predictive Validity
|
|
Term
Concurrent Validity (Aspect of Criterion-Related Validity) |
|
Definition
Relationship between the test scores and the criterion when the test scores are obtained at about the same time that the criterion measures are obtained. |
|
|
Term
Predictive Validity (Aspect of Criterion-Related Validity) |
|
Definition
Relationship between the test scores and the criterion measure obtained at a FUTURE time |
|
|
Term
Criterion: What should it have evidence of? |
|
Definition
The standard against which a test or a test score is evaluated. It should have evidence of:
- Reliability
- Relevance (the criterion should tell us something about what we are trying to measure)
- Validity
- Being "Uncontaminated" |
|
|
Term
What is Criterion contamination? |
|
Definition
When the criterion itself has been based, at least in part, on the predictor measure. (i.e., the criterion rating may reflect some knowledge of the predictor) |
|
|
Term
What is the outcome of Criterion Contamination? |
|
Definition
It may artificially inflate the correlation between the predictor and the criterion. |
|
|
Term
2 variations in Criterion (outcomes) |
|
Definition
- Dichotomous (is the disorder present/absent? will the person pass/fail?)
- Continuous (how well a person will do; not just pass/fail, not just yes or no) |
|
|
Term
|
Definition
- Scores may not work well for all decisions
- Different clinical utility based on group (ex: a test may work well in predicting one group's performance but not another's) |
|
|
Term
Incremental Validity (an additional increment) |
|
Definition
"Degree to which an additional predictor explains something about the criterion measure that is not explained by predictors (data) already in use" |
|
|
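In practice this is often quantified as the gain in explained criterion variance when the new predictor enters a hierarchical regression (a common operationalization, not stated on the card):

```latex
% Incremental validity as the change in R^2
\[
\Delta R^2 = R^2_{\text{existing predictors + new predictor}} - R^2_{\text{existing predictors}}
\]
```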
Term
3 types of Incremental Validity in Psychological research? (Hunsley & Meyer) |
|
Definition
-Testing instruments -Test-informed clinical inferences -New measures |
|
|
Term
Base Rate |
Definition
The probability of the outcome in the population under investigation; without using the measure, how often does this outcome happen? |
|
|
Term
Selection Ratio |
Definition
# of positions available / # of applicants for the positions |
|
|
Term
3 Rules of Base Rates (BR) |
|
Definition
- Too low: a test is best used to rule OUT a condition, but not rule it in
- Too high: a test is best used to rule IN a condition, but not rule it out
- Best at 50%: this is when tests work best :) |
|
|
Term
What happens when base rates are too low? |
|
Definition
A large proportion of false positives (this is why we need to be aware of base rates) |
|
|
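A worked example with hypothetical numbers: base rate of 2%, a test with 90% sensitivity and 90% specificity, screening 1,000 people.

```latex
% 20 people have the condition, 980 do not
% True positives:  0.90 * 20  = 18
% False positives: 0.10 * 980 = 98
\[
\text{PPP} = \frac{18}{18 + 98} \approx .16
\]
```

So roughly 5 of every 6 positive results are false positives, even with a good test.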
Term
Sensitivity |
Definition
Detection Rate
- The percentage of true positives that the test identifies correctly (accurate identification of people with a disorder) |
|
|
Term
Specificity |
Definition
Percentage of true negatives that the test identifies correctly (accurate exclusion of people without a disorder) |
|
|
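In 2x2 decision-table terms (standard notation; FN = false negatives, TN = true negatives):

```latex
% Sensitivity and specificity from the 2x2 decision table
\begin{align*}
\text{Sensitivity} &= \frac{TP}{TP + FN} \\[4pt]
\text{Specificity} &= \frac{TN}{TN + FP}
\end{align*}
```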
Term
Positive Predictive Power |
|
Definition
The probability that a person who has a positive test result actually has the outcome (TRUE POSITIVE) |
|
|
Term
Negative Predictive Power |
|
Definition
The probability that a person who has a negative test result does not have the outcome (A TRUE NEGATIVE) |
|
|
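The matching formulas (standard definitions; note that, unlike sensitivity and specificity, both predictive powers shift with the base rate):

```latex
% Predictive power from the 2x2 decision table
\begin{align*}
\text{PPP} &= \frac{TP}{TP + FP} \\[4pt]
\text{NPP} &= \frac{TN}{TN + FN}
\end{align*}
```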