Term
What are the 4 types of cognitive tests and define them |
|
Definition
Intelligence tests- tests of general intellectual potential
Aptitude Tests- tests of intellectual potential in a specific area
Achievement tests- tests of prior knowledge |
|
|
Term
What are the 3 types of Psychological tests |
|
Definition
Cognitive tests, Personality Assessment techniques, and Vocational Assessments |
|
|
Term
What are the pros and cons of the open response question format |
|
Definition
Open-response questions are more sensitive and better at detecting individual differences. However, they take longer to administer, responses can be hard to score, and scoring can lack interrater reliability. |
|
|
Term
What are the pros and cons of the restricted response format |
|
Definition
These questions are more reliable and take less time to administer, but they may not always accurately measure what you are trying to measure, and they are not as sensitive. |
|
|
Term
What are the 2 methods of test construction and what are the pros and cons of each |
|
Definition
Theoretical approach- items are based on a theory, so you start immediately with a base of information for writing them. Items tend to have face validity but may not actually be good measures.
Empirical approach- the test is assembled only after items have been tested and shown to be valid. These items have more validity, but it can be harder to write them initially because there is no theoretical base to draw on. |
|
|
Term
What is the contemporary approach |
|
Definition
The combination of the theoretical and empirical approaches: items are written from a theory and then tested to weed out those that are useless. The final items should have face validity, demonstrated validity, and the ability to discriminate where necessary. |
|
|
Term
What are 2 questions to consider with self report measures |
|
Definition
1. Do people have the capacity to be honest? 2. Do they have the motivation to be honest? |
|
|
Term
What are the 5 response sets |
|
Definition
1. Response acquiescence- participants agree to the question no matter what
2. Response deviation- participants respond abnormally to a question designed to screen those with psychological issues
3. Extremity Bias- participants answer most questions on one end of the scale or the other
4. Moderacy bias- participants only respond with the middlemost values on a scale
5. Social Desirability- participants try to respond in the most favorable way possible |
|
|
Term
How do you reduce the effect of social desirability |
|
Definition
1. Use a forced-choice response format
2. Write items that lack social desirability
3. Use scales that assess social desirability in participants (like the Marlowe-Crowne)
4. Use a validity scale to re-score participants' scores according to their reported level of social desirability
5. Demand reduction- use anonymity to encourage true responses |
|
|
Term
What is Paulhus's 2 factor approach to social desirability |
|
Definition
1. People use self-deceptive enhancement (positive adjustment, self-serving bias, the overconfidence phenomenon) to convince themselves they are better than they actually are.
2. People use impression management to make others think they are better than they are. |
|
|
Term
What did Costa and McCrae find |
|
Definition
That social desirability does not affect the correlation between self-reports and informant ratings, suggesting that it does not meaningfully distort personality measurement. |
|
|
Term
What do you learn from positive and negative skew, how do you adjust for each |
|
Definition
With positive skew you can distinguish among the people who did well on your test, but learn nothing about those who did average or poorly. You can fix this by making the test easier.
With negative skew you can distinguish among those who did poorly on your test, but learn nothing about those who did average or well. You can correct this by making the test harder. |
|
|
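The diagnosis above can be sketched numerically. This is an illustrative check (the score lists are made up) using the population form of the Fisher-Pearson skewness coefficient:

```python
import statistics

def skewness(scores):
    """Population Fisher-Pearson skewness: mean cubed deviation / sd^3.
    Positive -> scores pile up at the low end (test too hard);
    negative -> scores pile up at the high end (test too easy)."""
    m = statistics.mean(scores)
    sd = statistics.pstdev(scores)
    return sum((x - m) ** 3 for x in scores) / (len(scores) * sd ** 3)

too_hard = [10, 12, 11, 13, 12, 40]   # most scores low -> positive skew
too_easy = [40, 38, 39, 37, 38, 10]   # most scores high -> negative skew
print(skewness(too_hard) > 0, skewness(too_easy) < 0)  # True True
```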
Term
Item Difficulty Index |
|
Definition
A measure of response frequency that tells you how often participants keyed the correct response on an item: p = (U_P + L_P) / (U + L), where U_P and L_P are the numbers of correct responses in the upper and lower scoring groups and U and L are the group sizes. |
|
|
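The formula can be checked with a small sketch (the counts are hypothetical):

```python
def item_difficulty(u_p, l_p, u, l):
    """p = (U_P + L_P) / (U + L): proportion of the upper and lower
    scoring groups combined who keyed the correct response."""
    return (u_p + l_p) / (u + l)

# Hypothetical item: 18 of 25 upper scorers and 7 of 25 lower scorers correct.
print(item_difficulty(18, 7, 25, 25))  # 0.5
```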
Term
Item Discrimination Index |
|
Definition
Tells you how much more often someone in the upper group of scorers keyed the correct response than someone in the lower group: D = (U_P - L_P) / U.
.2-.39 is ok, .4 is good, .5 is great |
|
|
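A quick sketch of the index with hypothetical counts:

```python
def discrimination_index(u_p, l_p, n):
    """D = (U_P - L_P) / U for equal-sized upper and lower groups of size n."""
    return (u_p - l_p) / n

# Hypothetical item: 20 of 25 upper scorers vs. 10 of 25 lower scorers correct.
print(discrimination_index(20, 10, 25))  # 0.4 -> "good" by the rule of thumb
```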
Term
Item-Total Correlation |
|
Definition
correlation between scores on an individual item and total test scores. Good for both polytomous and dichotomous items. Tells you how well a single item measures what it is supposed to, and it is a good measure of discrimination. |
|
|
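The item-total correlation is just a Pearson correlation; a minimal sketch with made-up data:

```python
import statistics

def item_total_correlation(item, totals):
    """Pearson correlation between one item's scores and total test scores.
    Works for dichotomous (0/1) or polytomous item scores."""
    mi, mt = statistics.mean(item), statistics.mean(totals)
    cov = sum((a - mi) * (b - mt) for a, b in zip(item, totals))
    si = sum((a - mi) ** 2 for a in item) ** 0.5
    st = sum((b - mt) ** 2 for b in totals) ** 0.5
    return cov / (si * st)

item = [1, 1, 0, 1, 0, 0]       # hypothetical dichotomous item
totals = [9, 8, 4, 7, 5, 3]     # hypothetical total scores
print(round(item_total_correlation(item, totals), 3))  # 0.926
```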
Term
Corrected Item Correlation |
|
Definition
the item-total correlation recalculated with that item removed from the total score. Tells you the effect of that single item. |
|
|
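A sketch of the correction, assuming the item's score is subtracted from the total before correlating (data is hypothetical):

```python
import statistics

def pearson_r(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sum((a - mx) ** 2 for a in x) ** 0.5
                  * sum((b - my) ** 2 for b in y) ** 0.5)

def corrected_item_total(item, totals):
    """Correlate the item with the total score minus that item,
    so the item is not correlated with itself."""
    rest = [t - i for i, t in zip(item, totals)]
    return pearson_r(item, rest)

item = [1, 1, 0, 1, 0, 0]       # hypothetical dichotomous item
totals = [9, 8, 4, 7, 5, 3]     # totals that include the item above
print(round(corrected_item_total(item, totals), 3))  # 0.878
```

The corrected value is a bit lower than the raw item-total correlation because the item no longer contributes to the score it is correlated with.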
Term
Phi Coefficient |
|
Definition
coefficient which determines the correlation between two dichotomous items |
|
|
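A minimal sketch of phi from a 2x2 table of two dichotomous items (the cell counts are hypothetical):

```python
import math

def phi(a, b, c, d):
    """Phi coefficient for a 2x2 table of two dichotomous items:
    a = both correct, b = only item 1 correct,
    c = only item 2 correct, d = neither correct."""
    return (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical 100 examinees:
print(phi(40, 10, 10, 40))  # 0.6
```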
Term
Point Biserial Correlation |
|
Definition
coefficient which correlates a dichotomous and a continuous (polytomous) variable |
|
|
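A sketch using the common shortcut formula (equivalent to the Pearson r between the two variables); the data is hypothetical:

```python
import math
import statistics

def point_biserial(dichotomous, continuous):
    """r_pb = ((M1 - M0) / s) * sqrt(p * q), where M1 and M0 are the
    continuous-variable means of the 1 and 0 groups, s is the population
    SD of the continuous variable, and p is the proportion of 1s."""
    g1 = [c for d, c in zip(dichotomous, continuous) if d == 1]
    g0 = [c for d, c in zip(dichotomous, continuous) if d == 0]
    p = len(g1) / len(dichotomous)
    s = statistics.pstdev(continuous)
    return (statistics.mean(g1) - statistics.mean(g0)) / s * math.sqrt(p * (1 - p))

passed = [1, 1, 1, 0, 0, 0]                    # hypothetical item (0/1)
scores = [30.0, 28.0, 25.0, 20.0, 18.0, 15.0]  # hypothetical total scores
print(round(point_biserial(passed, scores), 3))  # 0.925
```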
Term
Reliability |
|
Definition
the extent to which a measure consistently measures what it is designed to measure. It equals true variance / observed variance. Tells you how much of an observed score is attributable to error. |
|
|
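The ratio can be sketched with hypothetical variance components:

```python
def reliability(true_variance, observed_variance):
    """r_xx = true variance / observed variance; 1 - r_xx is the
    proportion of observed variance attributable to error."""
    return true_variance / observed_variance

# Hypothetical test: 80 of the 100 units of observed variance are true variance.
r = reliability(80.0, 100.0)
print(r, round(1 - r, 2))  # 0.8 0.2
```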
Term
|
Definition
measure of test-retest reliability in personality research. A measure of temporal consistency- reliability over time (hope for .8) |
|
|
Term
Internal Consistency |
|
Definition
a measure of consistency across all the items of a measure (item homogeneity). Determines if the items are all measuring the same trait |
|
|
Term
How do you measure Item homogeneity |
|
Definition
1. Split-half coefficient + Spearman-Brown prophecy formula- splits a test into two halves, allowing you to correlate the test with itself and then step the correlation up to full-length reliability. Can be used for dichotomous or polytomous items.
2. Coefficient alpha- determines the average correlation across all possible combinations of items; used only with polytomous items. |
|
|
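Both methods can be sketched in a few lines; the inputs below (a split-half correlation of .7 and a small matrix of ratings) are hypothetical:

```python
import statistics

def spearman_brown(r_half):
    """Step a split-half correlation up to full-length reliability:
    r_full = 2 * r_half / (1 + r_half)."""
    return 2 * r_half / (1 + r_half)

def cronbach_alpha(items):
    """alpha = (k / (k - 1)) * (1 - sum(item variances) / total variance).
    `items` is a list of per-item score lists over the same respondents."""
    k = len(items)
    totals = [sum(person) for person in zip(*items)]
    item_var = sum(statistics.pvariance(it) for it in items)
    return (k / (k - 1)) * (1 - item_var / statistics.pvariance(totals))

print(round(spearman_brown(0.7), 3))  # 0.824
items = [[2, 4, 3, 5], [3, 5, 2, 4], [2, 5, 3, 4]]  # hypothetical polytomous ratings
print(round(cronbach_alpha(items), 3))  # 0.892
```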
Term
How is reliability affected by test length? |
|
Definition
The more items there are in a test, the more reliable and thorough it becomes. But you also have to make sure the test is reliable without being too time consuming. |
|
|
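The general Spearman-Brown prophecy formula quantifies this trade-off; the numbers below are a hypothetical example:

```python
def lengthened_reliability(r_current, length_factor):
    """General Spearman-Brown prophecy: predicted reliability when a test
    is lengthened by `length_factor` with comparable items."""
    return (length_factor * r_current) / (1 + (length_factor - 1) * r_current)

# Hypothetical 20-item test with r = .60, tripled to 60 comparable items:
print(round(lengthened_reliability(0.6, 3), 3))  # 0.818
```

Note the diminishing returns: each additional block of items buys less reliability, which is why test length has to be balanced against testing time.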
Term
Construct |
|
Definition
a psychological quality or characteristic which you believe affects personality, emotions, cognition, or behavior. Most are abstract and intangible |
|
|
Term
Validity |
|
Definition
Validity is the extent to which your test or an item measures what it is supposed to measure. Validity depends on reliability: a test must be consistent before it can be shown to be accurate. |
|
|
Term
Content Validity |
|
Definition
The extent to which a test measures characteristics related to a construct you are trying to test for. The extent to which the items in a test represent what you are trying to measure |
|
|
Term
Criterion Validity (2 types) |
|
Definition
Concurrent validity- the extent to which scores on your test correlate with an expected real-world behavior; the criterion scores are available right away.
Predictive validity- the extent to which scores on your scale predict future behavior that correlates with it; criterion scores are available later.
Both types are predicting; what changes is when the criterion scores are available. |
|
|
Term
Construct Validity |
|
Definition
The extent to which your scale measures what it is trying to measure. An ongoing process which is never complete. Assess construct validity using a pattern of relationships to show that scores on your test correlate with other behaviors and scores on other tests. |
|
|
Term
|
Definition
When your criterion becomes outdated and you discover a better criterion which more accurately describes/measures what you are looking for. |
|
|
Term
What is the best way to determine construct validity |
|
Definition
a multitrait multimethod matrix |
|
|
Term
Monomethod Block |
|
Definition
area of the MMM which shows the relationship between a single method of measurement and multiple traits. Represents the reliability of a method |
|
|
Term
Heteromethod Block |
|
Definition
the data showing the relationship between multiple methods of measurement and multiple traits. Represents the validity shared between two methods. |
|
|
Term
Correlations in the Hetero and Monomethod blocks represent: |
|
Definition
both method effects and trait content; these can be viewed individually in a multitrait multimethod approach |
|
|
Term
2 influences on the magnitude of correlations in the matrices of an MMM |
|
Definition
reliability and range truncation |
|
|