Term
| systematic error |
|
Definition
| Occurs when the information we collect consistently reflects a false picture of the concept we seek to measure, either because of the way we collect the data or the dynamics of those who are providing the data. |
|
|
Term
| bias |
|
Definition
| That quality of a measurement device that tends to result in a misrepresentation of what is being measured in a particular direction. |
|
|
Term
| random error |
|
Definition
| error that has no consistent pattern of effects. Random errors do not bias our measures; they make them inconsistent from one measurement to the next. |
|
|
Term
| Alternative Forms of Measurement |
|
Definition
| Written Self Reports, Interviews, Direct Behavioral Observation, Examining Available Records |
|
|
Term
| triangulation |
|
Definition
| deals with systematic error by using several different research methods to collect the same information |
|
|
Term
| reliability |
|
Definition
| is a matter of whether a particular technique, applied repeatedly to the same object, would yield the same results each time. The more reliable the measure, the less random error in it. |
|
|
Term
| interobserver and interrater reliability |
|
Definition
| the term for the degree of agreement or consistency between or among observers or raters |
|
|
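A minimal sketch of how interrater agreement could be checked, assuming two hypothetical raters have coded the same eight cases into yes/no categories; the data and category labels are illustrative, not from the deck:

```python
from collections import Counter

# Hypothetical codings of the same 8 cases by two raters.
rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater_b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]

n = len(rater_a)
# Percent agreement: the share of cases on which the raters gave the same code.
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Cohen's kappa adjusts that agreement for what would be expected by chance,
# based on how often each rater uses each category.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum((counts_a[c] / n) * (counts_b[c] / n)
               for c in set(rater_a) | set(rater_b))
kappa = (observed - expected) / (1 - expected)

print(f"percent agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")
```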
Term
| test-retest reliability |
|
Definition
| The term for assessing a measure's stability over time |
|
|
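A minimal sketch of test-retest reliability, assuming a hypothetical scale is administered twice to the same five people; a high correlation between the two administrations indicates stability over time (statistics.correlation requires Python 3.10+):

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical scores from the same five respondents, two weeks apart.
time_1 = [32, 28, 41, 25, 36]
time_2 = [30, 29, 40, 27, 35]

# The closer r is to 1.0, the more stable (reliable) the measure appears.
print(f"test-retest r = {correlation(time_1, time_2):.2f}")
```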
Term
| internal consistency reliability |
|
Definition
| assumes that the instrument contains multiple items, each of which is scored and combined with the scores of the other items to produce an overall score |
|
|
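A minimal sketch of one common internal consistency estimate, Cronbach's alpha, using hypothetical scores from six respondents on a four-item scale; the data are illustrative only:

```python
from statistics import variance

# Rows are respondents, columns are the four items of a hypothetical scale.
scores = [
    [4, 5, 4, 4],
    [2, 1, 2, 1],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [1, 2, 1, 2],
    [4, 4, 3, 4],
]

k = len(scores[0])                     # number of items
items = list(zip(*scores))             # each item's column of scores
totals = [sum(row) for row in scores]  # each respondent's overall score

# alpha = k/(k-1) * (1 - sum of item variances / variance of the total scores)
alpha = (k / (k - 1)) * (1 - sum(variance(item) for item in items) / variance(totals))
print(f"Cronbach's alpha = {alpha:.2f}")
```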
Term
| validity |
|
Definition
| refers to the extent to which an empirical measure adequately reflects the real meaning of the concept under consideration |
|
|
Term
| face validity |
|
Definition
| determined by subjective assessments; having face validity does not mean that a measure really measures what the researcher intends to measure, only that it appears to |
|
|
Term
| content validity |
|
Definition
| the term refers to the degree to which a measure covers the range of meanings included within the concept (includes elements of face validity) |
|
|
Term
| criterion-related validity |
|
Definition
| the degree to which a measure relates to some external criterion. For example, the validity of the College Board exam is shown in its ability to predict the college success of students |
|
|
Term
| known groups validity |
|
Definition
| a form of criterion-related validity that pertains to the degree to which an instrument accurately differentiates between groups that are known to differ with respect to the variable being measured |
|
|
Term
| construct validity |
|
Definition
| the degree to which a measure relates to other variables as expected within a system of theoretical relationships and as reflected by the degree of its convergent validity and discriminant validity |
|
|
Term
| convergent validity |
|
Definition
| a measure has this when its results correspond to the results of other methods of measuring the same construct |
|
|
Term
| discriminant validity |
|
Definition
| a measure has this when its results do not correspond as highly with measures of other constructs as they do with other measures of the same construct |
|
|
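A minimal sketch of checking convergent and discriminant validity together, assuming three hypothetical measures: two depression scales and one anxiety scale (statistics.correlation requires Python 3.10+):

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical scores for six clients on three measures.
depression_a = [10, 22, 15, 30, 8, 25]
depression_b = [12, 20, 16, 28, 9, 24]
anxiety      = [18, 14, 25, 16, 20, 13]

# Convergent validity: the two depression measures should correlate highly.
# Discriminant validity: each should correlate less with the anxiety measure.
print(f"depression_a vs depression_b: {correlation(depression_a, depression_b):.2f}")
print(f"depression_a vs anxiety:      {correlation(depression_a, anxiety):.2f}")
```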
Term
| inference |
|
Definition
| a conclusion that can be drawn in light of our research design and other findings |
|
|
Term
| causal inference |
|
Definition
| derived from a research design and findings that logically imply that the independent variable really has a causal impact on the dependent variable |
|
|
Term
| research design |
|
Definition
| refers to all the decisions made in planning and conducting research |
|
|
Term
| What are the three requirements in a causal relationship? |
|
Definition
| 1. The cause precedes the effect in time. 2. The two variables are empirically correlated with one another. 3. The observed empirical correlation between the two variables cannot be explained away as the result of some third variable that causes both of them. |
|
|
Term
| internal validity |
|
Definition
| refers to the confidence we have that the results of a study accurately depict whether one variable is or is not a cause of another |
|
|
Term
| threats to internal validity |
|
Definition
| refers to the possibility that investigators might erroneously conclude that differences in outcomes were caused by the evaluated intervention when in fact something else really caused the differences |
|
|
Term
| pre-experimental designs |
|
Definition
| Pilot study designs for evaluating the effectiveness of interventions; they do not control for threats to internal validity |
|
|
Term
| one-shot case study |
|
Definition
| Pre-experimental design with low internal validity that simply measures a single group of subjects on a dependent variable at one point in time after they have been exposed to a stimulus. (X O) |
|
|
Term
| One-Group Pretest-Posttest Design |
|
Definition
| a pre-experimental design with low internal validity that assesses a dependent variable before and after a stimulus is introduced but does not attempt to control for alternative explanations of any changes in scores that are observed. (O1 X O2) |
|
|
Term
| Posttest-only design with nonequivalent groups (static-group comparison design) |
|
Definition
| a pre-experimental design that involves two groups that may not be comparable, in which the dependent variable is assessed after the independent variable is introduced for one of the groups. (X O / O) |
|
|
Term
| experiment |
|
Definition
| a research method that attempts to provide maximum control for threats to internal validity by (1) randomly assigning individuals to experimental and control groups, (2) introducing the independent variable to the experimental group while withholding it from the control group, and (3) comparing the amount of experimental and control group change on the dependent variable |
|
|
Term
| experimental group |
|
Definition
| in experiments, a group of participants who receive the intervention being evaluated and who resemble the control group in all other respects. The comparison of the experimental and control groups at the end of the experiment points to the effects of the tested intervention |
|
|
Term
| control group |
|
Definition
| in experimentation, a group of participants who do not receive the intervention being evaluated and who should resemble the experimental group. |
|
|
Term
| pretest-posttest control group design |
|
Definition
| the classic experimental design, in which subjects are randomly assigned to an experimental group that receives an intervention being evaluated and to a control group that does not. Each group is tested on the dependent variable before and after the experimental group receives the intervention. (R O1 X O2 / R O1 O2) |
|
|
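A minimal sketch of how results from this design could be summarized, using hypothetical pretest (O1) and posttest (O2) scores; with random assignment, the extra change in the experimental group is attributed to the intervention:

```python
from statistics import mean

# Hypothetical O1 and O2 scores for randomly assigned groups.
experimental = {"pre": [20, 18, 25, 22, 19], "post": [28, 27, 33, 30, 26]}
control      = {"pre": [21, 19, 24, 23, 20], "post": [22, 20, 25, 23, 21]}

exp_change = mean(experimental["post"]) - mean(experimental["pre"])
ctl_change = mean(control["post"]) - mean(control["pre"])

# The control group's change estimates what would have happened without X.
print(f"experimental change = {exp_change:.1f}, control change = {ctl_change:.1f}")
print(f"estimated intervention effect = {exp_change - ctl_change:.1f}")
```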
Term
| Posttest-only control group design |
|
Definition
| a variation of the classical experimental design that avoids the possible testing effects associated with pretesting by testing only after the experimental group receives the intervention. (R X O / R O) |
|
|
Term
| Solomon four group design |
|
Definition
| experimental design that assesses testing effects by randomly assigning subjects to four groups, introducing the intervention being evaluated to two of them, conducting both pretesting and posttesting on one group that receives the intervention and one group that does not, and conducting posttesting only on the other two groups. (R O1 X O2 / R O1 O2 / R X O2 / R O2) |
|
|
Term
| Alternative treatment design with pretest |
|
Definition
| an experiment that compares the effectiveness of two alternative treatments. (R O1 Xa O2 / R O1 Xb O2) |
|
|
Term
| dismantling studies |
|
Definition
| Experiments designed to test not only whether an intervention is effective but also which components of the intervention may or may not be necessary to achieve its effects. |
|
|
Term
| randomization |
|
Definition
| a technique for assigning experimental participants to experimental groups and control groups at random. |
|
|
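A minimal sketch of randomization, shuffling a hypothetical participant list so that each person has an equal chance of ending up in either group:

```python
import random

# Hypothetical participant IDs.
participants = [f"P{i:02d}" for i in range(1, 21)]

random.shuffle(participants)           # every ordering is equally likely
half = len(participants) // 2
experimental_group = participants[:half]
control_group = participants[half:]

print("experimental:", experimental_group)
print("control:     ", control_group)
```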
Term
| matching |
|
Definition
| pairs of participants are matched on the basis of their similarities on one or more variables and one member of the pair is assigned to the control group and the other to the experimental group |
|
|
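A minimal sketch of matching, pairing hypothetical participants who have similar pretest scores and then splitting each pair between the two groups at random:

```python
import random

# Hypothetical pretest scores used as the matching variable.
pretest = {"Ann": 12, "Ben": 30, "Cal": 14, "Dee": 29, "Eve": 21, "Fay": 20}

ranked = sorted(pretest, key=pretest.get)                    # order cases by the matching variable
pairs = [ranked[i:i + 2] for i in range(0, len(ranked), 2)]  # adjacent cases form matched pairs

experimental_group, control_group = [], []
for pair in pairs:
    random.shuffle(pair)                                     # coin flip within each matched pair
    experimental_group.append(pair[0])
    control_group.append(pair[1])

print("experimental:", experimental_group)
print("control:     ", control_group)
```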
Term
| placebo effects |
|
Definition
| Changes in the dependent variable that are caused by the power of suggestion among participants in an experimental group that they are receiving something special that is expected to help them. These changes would not occur if they received the experimental intervention without that awareness |
|
|
Term
| diffusion or imitation of treatments |
|
Definition
| service providers or service recipients are influenced unexpectedly in ways that tend to diminish the planned differences in the way a tested intervention is implemented among the groups being compared |
|
|
Term
| Compensatory equalization |
|
Definition
| a threat to the validity of an evaluation of an intervention's effectiveness that occurs when practitioners in the comparison routine-treatment condition compensate for the differences in treatment between their group and the experimental group by providing enhanced services that go beyond the routine treatment regimen for their clients, thus potentially blurring the true effects of the tested intervention |
|
|
Term
| quasi-experimental design |
|
Definition
| design that attempts to control for threats to internal validity and thus permits causal inferences but is distinguished from true experiments primarily by the lack of random assignment of subjects |
|
|
Term
| nonequivalent comparison group design |
|
Definition
| can be used when we are unable to randomly assign participants to groups but can find an existing group to which the treatment group can be compared. |
|
|
Term
| multiple pretests |
|
Definition
| means administering the same pretest at different time points before intervention begins. It is a way to strengthen internal validity in nonequivalent comparison group designs. |
|
|
Term
| switching replication |
|
Definition
| involves administering the treatment to the comparison group after the first posttest. If we replicate in that group, in the second posttest, the improvement made by the experimental group in the first posttest, then we reduce doubt about a selection bias (used in the nonequivalent comparison group design) |
|
|
Term
| time-series designs |
|
Definition
| quasi-experimental designs that go beyond the use of multiple pretests by additionally emphasizing the use of multiple posttests |
|
|
Term
| simple interrupted time series design |
|
Definition
| a quasi-experimental design in which no comparison group is utilized and that attempts to develop causal inferences based on a comparison of trends over multiple measurements before and after an intervention is introduced. |
|
|
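A minimal sketch of the logic of a simple interrupted time series, using hypothetical weekly counts of a problem behavior with the intervention introduced after week 6; the repeated pre- and post-intervention observations show whether a change breaks an otherwise stable trend:

```python
from statistics import mean

# Hypothetical weekly counts; the intervention starts after week 6.
weekly_counts = [14, 15, 13, 14, 16, 15,   # pre-intervention observations
                 9, 8, 9, 7, 8, 8]         # post-intervention observations
intervention_week = 6

pre = weekly_counts[:intervention_week]
post = weekly_counts[intervention_week:]

# A sustained shift after the interruption, against a stable baseline,
# supports a causal inference even without a comparison group.
print(f"pre-intervention mean = {mean(pre):.1f}, post-intervention mean = {mean(post):.1f}")
```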
Term
| multiple time series designs |
|
Definition
| stronger forms of time-series analysis than simple time-series designs (greater internal validity) because they add time-series analysis to the nonequivalent comparison groups design |
|
|
Term
| interrupted time series with a nonequivalent comparison group |
|
Definition
| in this design both an experimental group and a nonequivalent comparison group (neither assigned randomly) are measured at multiple points in time before and after an intervention is introduced to the experimental group |
|
|
Term
| cross-sectional study |
|
Definition
| a study based on observations that represent a single point in time. It may have exploratory, descriptive, or explanatory purposes. |
|
|
Term
| case-control design |
|
Definition
| a design for evaluating interventions that compares groups of cases that have had contrasting outcomes and then collects retrospective data about past differences that might explain the difference in outcomes. It relies on multivariate statistical procedures. |
|
|
Term
| recall bias |
|
Definition
| a common limitation in case-control designs that occurs when a person's current recollections of the quality and value of past experiences are tainted by knowing that things didn't work out for them later in life. |
|
|
Term
| treatment fidelity |
|
Definition
| refers to the degree to which the intervention actually delivered to clients was delivered as intended |
|
|
Term
| contamination of the control condition |
|
Definition
| the control condition can be contaminated if the control group and the experimental group members interact. |
|
|