Term
What are the most important factors in determining what group-level design to use in a specific study? |
|
Definition
1) What the research question is. 2) How much knowledge about the problem area is available. |
|
|
Term
What are the characteristics of "ideal experiments?" |
|
Definition
*Explanatory
*Controlling the time order of variables
*Manipulating the IV
*Establishing relationships btwn variables
*Controlling Rival Hypotheses
*Using a control group
*Randomly assigning to groups |
|
|
Term
What are the 3 types of research designs? |
|
Definition
1. Exploratory - when there has been little research done on the topic. (Least precise)
2. Descriptive - Finding out how much of something there is. (More precise)
3. Explanatory - "Does the IV cause change in DV?" (Most precise) |
|
|
Term
Explain: Controlling the time order of variables |
|
Definition
The IV must happen before the change in the DV. Otherwise, cannot prove that the IV caused the change. |
|
|
Term
Explain: Manipulating the IV |
|
Definition
There has to be some manipulation of the IV between the groups in one of three ways:
1) treatment/no treatment
2) significant amt. of treatment/smaller amt. of treatment
3) one type of treatment/different type of treatment |
|
|
Term
Explain: Establishing relationships between variables |
|
Definition
There needs to be an established relationship between the IV and the DV, and the direction of cause and effect must be clear. Ex: Economy vs. domestic violence: which causes which? |
|
|
Term
Explain: Controlling rival hypotheses |
|
Definition
1. Holding extraneous variables constant: Ex: You are treating clients for depression with counseling, but they are also receiving medication from a psychiatrist. Making sure the medication has been ongoing for quite some time before introducing the counseling holds that variable constant, so you would know any new change is likely caused by the new IV (counseling). |
|
|
Term
Explain: Controlling rival hypotheses (#2) |
|
Definition
2. Using correlated variation: When you have 2 things that are closely related, such as income and housing, you can look at just one in the research study because income and housing are correlated. |
|
|
Term
Explain: Controlling rival hypotheses (#3) |
|
Definition
3. Using analysis of covariance: used when the two groups are not as alike as we would like them to be on all important variables, or when inequivalencies are discovered during the course of the study. This is a statistical method used to compensate for those differences. Uses random sampling and random assignment, then the control group gets either no treatment or less treatment than the experimental group and the results can be generalized back to the population. |
|
|
Term
Explain: Using a control group |
|
Definition
At least one control group must be used in an ideal experiment in addition to the experimental group.
Has to be random assignment.
*Strengthens internal validity
*In an existing group, the internal validity is weaker |
|
|
Term
Explain: Randomly assigning to groups |
|
Definition
After the sample has been selected, individuals are randomly assigned to either an experimental group or a control group in such a way that the two groups are equivalent. AKA "randomization." One way to do this is through matched pairs. |
|
|
Term
Explain: Matched pairs |
|
Definition
*Subset of randomization
*Deliberate method of assigning people to groups
Ex: A parenting skills training program for foster mothers is being evaluated. The women chosen for the sample would be matched in pairs by skill level: the 2 most skilled would be matched, then the next 2, etc. One person from each matched pair would be assigned to the experimental group and the other to the control group, with the order reversed for every second pair, and so on. |
|
|
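The matched-pairs procedure described above can be sketched in Python. This is a hypothetical illustration, not from the source; the function name, participant names, and skill scores are all made up.

```python
def matched_pair_assignment(participants, score_key):
    """Split participants into experimental and control groups by matched pairs.

    Sort by the matching variable (e.g., skill level), pair adjacent
    participants, then send one member of each pair to each group,
    reversing the assignment order for every second pair.
    """
    ordered = sorted(participants, key=score_key, reverse=True)
    experimental, control = [], []
    for pair_number, i in enumerate(range(0, len(ordered) - 1, 2)):
        first, second = ordered[i], ordered[i + 1]
        if pair_number % 2 == 1:      # reverse assignment for every second pair
            first, second = second, first
        experimental.append(first)
        control.append(second)
    return experimental, control

# Hypothetical foster mothers with skill scores
mothers = [("A", 90), ("B", 85), ("C", 70), ("D", 65), ("E", 50), ("F", 45)]
experimental, control = matched_pair_assignment(mothers, score_key=lambda m: m[1])
```

Because each pair is matched on skill before assignment, the two groups end up roughly equivalent on that variable.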
Term
List threats to internal validity. |
|
Definition
*History
*Maturation
*Testing
*Instrumentation error
*Statistical regression
*Differential selection of research participants
*Mortality
*Reactive effects of research participants
*Interaction effects
*Relations between experimental and control groups |
|
|
Term
Explain: History |
|
Definition
An event that happens before or during an experiment that impacts the experiment. Often happens between the pre-test and post-test. Ex: While conducting a study on the effects of an educational program on racial tolerance, a terrorist attack such as 9-11 occurs. This may have an impact on racial tolerance. |
|
|
Term
Explain: Maturation |
|
Definition
People change. Changes can be physical or mental, and these changes can impact a study. |
|
|
Term
Explain: Testing |
|
Definition
When the pre-test affects the post-test. Ex: Someone remembers the answers from the pre-test while taking the post-test, had time to think about them or how he should have answered, and responds accordingly. |
|
|
Term
Explain: Instrumentation error |
|
Definition
Questions of measurement reliability and validity. The instrument itself may be unreliable, or the administration of the instrument may cause a problem. The circumstances under which the instrument is used may affect the results (where, when, how, and by whom). |
|
|
Term
Explain: Statistical regression |
|
Definition
The tendency of extremely low and extremely high scores to regress or move toward the average score for everyone in the research study. |
|
|
Term
Explain: Differential selection of research participants |
|
Definition
Arises when existing (comparison) groups are used rather than randomly assigned ones; preexisting differences between the groups diminish internal validity. |
|
|
Term
Explain: Mortality |
|
Definition
When people drop out of studies before they are completed. More may drop out of one group than the other, causing the groups to be less equal. |
|
|
Term
Explain: Reactive effects of research participants |
|
Definition
Changes in behaviors or feelings of research participants may be caused by their reaction to the situation or the knowledge that they are participating in a research study. Ex: The Hawthorne Effect. |
|
|
Term
Explain: Interaction effects |
|
Definition
Interaction among the various threats to internal validity can cause an effect of its own. The most common combination involves differential selection and maturation. |
|
|
Term
Explain: Relations between experimental and control groups |
|
Definition
Effects of the use of experimental and control groups that receive different interventions. Include:
1) Diffusion of treatments: When members of the 2 groups talk about the study and one imitates the treatment the other is getting.
2) Compensatory equalization: When the person doing the study or administering the intervention to the experimental group feels sorry for the control group and attempts to compensate them.
3) Compensatory rivalry: Control group feels motivated to compete with experimental group.
4) Demoralization: Control group feels deprived and members give up and drop out. |
|
|
Term
What do we mean when we say "threat to internal validity?" |
|
Definition
When there may be alternative explanations, other than the IV, for changes in the DV in an experiment. Rival hypotheses must be controlled; the better they are controlled, the higher the internal validity. |
|
|
Term
List threats to external validity. |
|
Definition
*Pretest-treatment interaction
*Selection-treatment interaction
*Specificity of variables
*Reactive effects
*Multiple-treatment interference
*Researcher bias |
|
|
Term
What is external validity? |
|
Definition
The degree to which the results of a specific research study are generalizable: 1) to another population, 2) to another setting, 3) to another time.
Concern: Is the sample different from the general population? |
|
|
Term
Explain: Pretest-treatment interaction |
|
Definition
Similar to the testing threat to internal validity. The nature of a pretest can alter the way research participants respond to the treatment. Participants may respond negatively, for example, because they do not want to be "trained" to respond the way they think the researcher wants them to. This renders them different from the general population and no longer a good sample for a study. |
|
|
Term
Explain: Selection-treatment interaction |
|
Definition
Common when random selection is not possible. Ex: 50 agencies refuse to participate in the study, but the 51st agrees. That agency differs from the general population in its thinking, motivation, etc., and therefore external validity is compromised. |
|
|
Term
Explain: Specificity of variables |
|
Definition
Time, place, and group limit the validity of the results. A study of a certain population at a specific time in a specific way may not be generalizable to others at different times in different settings. |
|
|
Term
Explain: Reactive effects |
|
Definition
Participants' knowledge that they are being researched makes them different from the general population. |
|
|
Term
Explain: Multiple-treatment interference |
|
Definition
If more than one treatment is introduced, we cannot be sure which one caused the change in the DV. |
|
|
Term
Explain: Researcher bias |
|
Definition
When the researcher is biased and, as a result, treats the participants differently, impacting the test results. |
|
|
Term
What are the characteristics of exploratory research designs? |
|
Definition
*Lowest level
*No pretest
*Only measure DV @ posttest (no time order)
*No manipulation of IV (X)
*No random sampling
*No random assignment (no comparison group) |
|
|
Term
List Exploratory research designs |
|
Definition
1) One-group posttest-only design
2) Cross-sectional survey design
3) Multigroup posttest-only design
4) Longitudinal one-group posttest-only design
5) Longitudinal survey design |
|
|
Term
What are the characteristics of descriptive research designs? |
|
Definition
*Midpoint on the knowledge continuum
*Have some, but not all, of the requirements for an ideal experiment
*Usually require specification of the time order of variables, manipulation of the IV, & establishment of the relationship btwn IV & DV.
*May control for rival hypotheses
*May use 2nd group for comparison (NOT control group)
*The requirement usually lacking is the random assignment of participants to 2 or more groups. |
|
|
Term
List examples of descriptive research designs. |
|
Definition
1) Randomized one-group posttest-only design
2) Randomized cross-sectional survey design
3) One-group pretest-posttest design
4) Comparison group posttest-only design
5) Comparison group pretest-posttest design
6) Interrupted time-series design |
|
|
Term
What are the characteristics of explanatory research designs? |
|
Definition
*Highest level on knowledge continuum
*Most rigid requirements
*Most able to produce results that can be generalized to other people and situations
*Most able to provide valid and reliable research results
*Purpose is to establish a causal connection between the IV and the DV |
|
|
Term
List explanatory research designs. |
|
Definition
1) Classical experimental design
2) Randomized posttest-only control group design |
|
|
Term
What is random sampling? |
|
Definition
An unbiased selection process conducted so that all members of a population have an equal chance of being selected to participate in a research study. |
|
|
Term
What is random assignment? |
|
Definition
The process of assigning individuals to experimental or control groups so that the groups are equivalent; also referred to as randomization. |
|
|
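Randomization after the sample is drawn can be sketched in Python. This is a hypothetical illustration, not from the source; the function name and the fixed seed are assumptions made only so the split is reproducible.

```python
import random

def random_assignment(sample, seed=None):
    """Randomly split an already-drawn sample into two equivalent groups."""
    rng = random.Random(seed)      # seed fixed only for a reproducible example
    shuffled = list(sample)        # copy so the original sample is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]   # (experimental, control)

participants = list(range(1, 21))  # 20 hypothetical participant IDs
experimental, control = random_assignment(participants, seed=7)
```

Because the shuffle gives every participant the same chance of landing in either group, the two groups should be equivalent on average.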
Term
Symbols used in group designs: |
|
Definition
Rs = Random selection from a population
Ra = Random assignment to a group
O1 = First measurement of the dependent variable
X = Independent variable, or intervention
O2 = Second measurement of the DV |
|
|
Term
Explain: One-group posttest-only design |
|
Definition
*one-shot case study
*cross-sectional case study design
*provides one single measure (O1) of what happens when one group of people is subjected to one treatment/experience (X).
*No random selection - cannot be generalized. |
|
|
Term
Describe: Cross-sectional survey design |
|
Definition
*Surveys a cross-section of some particular population only once. Results constitute a single measurement.
*No random sampling, no IV, no DV. |
|
|
Term
Describe: Multigroup posttest-only design |
|
Definition
*Elaboration of the one-group posttest-only design
*More than one group used
*No random sample, no generalization outside sample group |
|
|
Term
Describe: Longitudinal one-group posttest-only design |
|
Definition
*Exactly like the one-group posttest-only design, except that it provides multiple measurements of the DV.
*AKA longitudinal case study design
*No random sample, no generalization. |
|
|
Term
Describe: Longitudinal survey design |
|
Definition
*Unlike cross-sectional surveys, where the variable of interest (usually a DV) is measured at one point in time, longitudinal surveys provide data at various points in time so that changes can be monitored over time. There are three different types: (1) trend studies, (2) cohort studies, and (3) panel studies. |
|
|
Term
Describe: Randomized one-group posttest-only design |
|
Definition
*Members of the group are randomly selected from some population.
*Otherwise identical to the exploratory one-group posttest-only design. |
|
|
Term
Describe: Randomized cross-sectional survey design |
|
Definition
*Obtains data only once from a sample drawn from a particular population
*Random sample, generalizable |
|
|
Term
Describe: One-group pretest-posttest design |
|
Definition
*AKA before-after design
*Includes a pretest of DV used to compare with posttest results
O1
X
O2 |
|
|
Term
Describe: Comparison group posttest-only design |
|
Definition
*Uses comparison group
*No random sample
*Intervention, then posttest of both groups
Experimental Group:
X
O1
Comparison Group:
X
O1 |
|
|
Term
Describe: Comparison group pretest-posttest design |
|
Definition
*Elaborates on the one-group pretest-posttest design by adding a comparison group.
Experimental Group:
O1
X
O2
Comparison Group:
O1
O2 |
|
|
Term
Describe: Interrupted time-series design |
|
Definition
Series of pretests and posttests conducted on a group of research participants over time, both before and after the IV is introduced. |
|
|
Term
Describe: Classical Experimental Design |
|
Definition
*Basis for all experimental designs.
*Involves an experimental group & control group, both created by random assignment (and, if possible, by random selection from population).
1. Both groups take a pretest (O1) at same time
2. IV is given to experimental group
3. Both groups take posttest |
|
|
Term
Describe: Randomized posttest-only control group design |
|
Definition
*Identical to descriptive comparison group posttest-only design except that the research participants are randomly assigned to two groups. Therefore, this design has a control group rather than a comparison group.
*Usually involves only 2 groups, one experimental and one control.
*No pretests.
R - Rs & Ra
X - IV
O1 - 1st and only measurement of DV |
|
|
Term
Describe: Trend Studies (Longitudinal Survey Design) |
|
Definition
Samples different groups of people at different points in time from the same population.
O1- Sample 1, Year 1
O2 - Sample 2, Year 2
O3 - Sample 3, Year 3 |
|
|
Term
Describe: Cohort Studies (Longitudinal Survey Design) |
|
Definition
Subjects who presently have a certain condition and/or receive a particular treatment are followed over time and compared to another group who are not affected by the condition. (Same group, but not the same people.) |
|
|
Term
Describe: Panel Studies (Longitudinal Survey Design) |
|
Definition
The same individuals are followed over a period of time. |
|
|