Term
What are evidence-based practices? |
|
Definition
Practices based on controlled scientific research:
- Clearly defined independent variables (the intervention + moderators/mediators)
- Clearly defined dependent variables (the outcomes)
- Procedures for monitoring the dependent variable and independent variable (tx integrity)
- Design that controls for threats to internal validity – tx caused the change
- Replications to provide evidence for external validity – generalizability of findings
Evidence exists on a continuum:
- Strong evidence/support
- Promising evidence/support
- Marginal evidence/support
- No evidence/support |
|
|
Term
Moderator |
Definition
Qualitative (sex, race, SES) or quantitative (achievement scores, depression scores) variable that affects the direction or strength of the relationship between an IV and DV (a 3rd variable) – often found in meta-analyses; a correlation. Example: Cognitive behavior therapy reduces depression more in women than in men |
|
|
Term
Mediator |
Definition
Explains or accounts for the relationship between an IV and DV. Specifies how, why, or through what process an IV affects a DV (causal). Example: Coercive parenting behavior leads to the development of antisocial behavior patterns in boys via negative reinforcement – negative reinforcement is the mediator, the explanation of how parenting bx leads to antisocial bx |
|
|
Term
Moderated and mediated effects are studied via |
|
Definition
Multiple regression
Structural equation modeling |
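A moderated effect can be sketched without a stats package: in a 2x2 design (treatment x sex), the regression interaction coefficient equals the "difference of differences" between group means. A minimal sketch using the CBT-by-sex example from the moderator card; the depression scores themselves are hypothetical:

```python
# Moderation as a difference of differences: does sex change the strength
# of the treatment -> outcome relationship? (Scores are made-up data.)

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical post-treatment depression scores (lower = better)
scores = {
    ("control", "men"):   [20, 22, 21, 19],
    ("cbt",     "men"):   [18, 19, 17, 18],   # CBT helps men a little
    ("control", "women"): [21, 20, 22, 21],
    ("cbt",     "women"): [14, 13, 15, 14],   # CBT helps women a lot
}

effect_men   = mean(scores[("control", "men")])   - mean(scores[("cbt", "men")])
effect_women = mean(scores[("control", "women")]) - mean(scores[("cbt", "women")])

# Nonzero interaction = the moderator (sex) alters the treatment effect
interaction = effect_women - effect_men
print(f"CBT effect for men:   {effect_men:.2f}")
print(f"CBT effect for women: {effect_women:.2f}")
print(f"Moderation (interaction): {interaction:.2f}")
```

In multiple regression the same quantity appears as the coefficient on the tx-by-sex product term; SEM generalizes this to latent variables.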
|
|
Term
National Autism Center's National Standards Report Requirements for Group Design & Single Case Design |
|
Definition
Group = 2 or more groups, random assignment, n > 10 per group. Procedures for missing data = multiple imputation procedures
Single Case = minimum 3 comparisons (3 points in time or phase repetitions) - Minimum 5 data points per condition - N = 3 minimum - No missing data possible |
|
|
Term
Measurement of DV (National Autism Center) |
|
Definition
Test/Scale - type of measurement (rating/checklist) - should be standardized - have psychometric data - evaluators blind and independent
Systematic Direct Observations - type of measurement (event/interval/permanent product) - Dimension of behavior (frequency/intensity/duration/latency) - IOA > 90%, kappa > .75 - % of sessions IOA collected and % of sessions data collected |
|
|
Term
What does kappa control for that makes it better than IOA? |
|
Definition
Kappa controls for chance agreement between observers; simple IOA (percent agreement) can be inflated by chance when the behavior occurs at very high or very low rates |
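The chance-correction idea behind kappa can be shown in a few lines. A minimal sketch with hypothetical interval records for a low-rate behavior (1 = behavior occurred): the two observers agree 80% of the time, yet nearly all of that agreement is expected by chance, so kappa is at or below zero:

```python
# Why kappa beats simple IOA: with a low-rate behavior, two observers who
# mostly score "absent" agree often by chance alone; kappa discounts that.

obs_a = [0, 0, 0, 0, 0, 0, 0, 0, 1, 0]
obs_b = [0, 0, 0, 0, 0, 0, 0, 1, 0, 0]

n = len(obs_a)
p_observed = sum(a == b for a, b in zip(obs_a, obs_b)) / n  # simple IOA

# Chance agreement: P(both score 1) + P(both score 0)
p1_a, p1_b = sum(obs_a) / n, sum(obs_b) / n
p_chance = p1_a * p1_b + (1 - p1_a) * (1 - p1_b)

kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"IOA (percent agreement): {p_observed:.2f}")  # looks great: 0.80
print(f"Chance agreement:        {p_chance:.2f}")
print(f"Kappa:                   {kappa:.2f}")       # below zero here
```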
|
Term
Measurement of IV (National Autism Standards) |
|
Definition
> 80% treatment integrity/fidelity, IOA > 80% |
|
|
Term
Generalization Data (National Autism Standards) |
|
Definition
objective data
maintenance data (minimum of 2), across settings, stimuli and persons |
|
|
Term
Research Approaches & Scientific Evidence - Experimental Design - Quasi-experimental design - Single Case experimental design - Regression discontinuity design |
|
Definition
Exp - Randomized clinical trial (RCT - strongest evidence)
Quasi - nonrandomized/intact groups, matching/ANCOVA (ways to control up front or statistically)
Regression - on next card |
|
|
Term
Regression Discontinuity Design |
|
Definition
Participants assigned to groups based on cut score (above or below)
Assignment variable may be pretest score on outcome variable
Can be used to compare 2 or more tx
Powerful design - good protection against threats to internal validity
Compares slopes of regression lines of exp and control groups |
|
|
Term
Single Case Experimental Design Facts |
|
Definition
Individual case is unit of intervention & unit of data analysis
Within design, the case provides its own control for comparison purposes
Outcome variable is measured repeatedly within & across conditions, giving more frequent measurement
Experimental control involves replication of intervention effects – allows you to need fewer participants
Introduction & withdrawal (reversal) of independent variable (ABAB design), demonstrating experimental control
Iterative manipulation of independent variable across different observational phases (alternating treatments or multielement designs)
Staggered introduction of independent variable across different points in time (multiple baseline design)
To meet internal validity standards, study must include at least 3 attempts to demonstrate effect at 3 different points in time or with 3 different phase repetitions |
|
|
Term
Causal Questions that Single Case Designs can answer |
|
Definition
Is there a causal relation b/t the introduction of an IV & change in a DV? - Does intervention B reduce a px bx for this case or these cases?
What is the effect of altering a component of multi-component IV on a DV? - Does adding intervention C to intervention B further reduce a px bx? Looking at interaction effects
What are relative effects of 2 or more IVs on a DV? - Is intervention B or intervention C more effective in reducing px bx? <-- Have issues of sequencing effects here |
|
|
Term
Quantitative Features of Single Case Designs - Level - Trend - Variability |
|
Definition
Level: Mean score for the data within a phase
Trend: Slope of best-fitting straight line for the data within a phase
Variability: Fluctuation of data around the level within a phase |
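All three features are directly computable from the session data in a phase. A minimal sketch with hypothetical baseline scores (the values are illustrative, not from the deck), using a least-squares slope for the trend:

```python
# Level, trend, and variability for one phase of single case data.
import statistics

phase = [12, 14, 13, 16, 15, 17]            # scores, one per session
sessions = list(range(1, len(phase) + 1))

level = statistics.mean(phase)              # level: mean within the phase

# Trend: slope of the best-fitting (least squares) straight line
mx, my = statistics.mean(sessions), level
slope = (sum((x - mx) * (y - my) for x, y in zip(sessions, phase))
         / sum((x - mx) ** 2 for x in sessions))

variability = statistics.stdev(phase)       # fluctuation around the level

print(f"Level: {level:.2f}  Trend: {slope:.2f}/session  "
      f"Variability (SD): {variability:.2f}")
```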
|
|
Term
Quantitative features of Single case designs |
|
Definition
Immediacy of effect
Overlap – PND (percentage of non-overlapping data points); HLM can be a better measure
Consistency of data patterns across similar phases
Intraocular test of significance (Do the eyes have it?) |
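PND is simple enough to compute by hand. A minimal sketch assuming a behavior-reduction goal (so treatment points are "non-overlapping" when they fall below the lowest baseline point); the session counts are hypothetical:

```python
# PND: percentage of non-overlapping data points for a reduction target.

def pnd_reduction(baseline, treatment):
    """% of treatment points falling below the lowest baseline point."""
    floor = min(baseline)
    non_overlap = sum(1 for y in treatment if y < floor)
    return 100.0 * non_overlap / len(treatment)

baseline  = [9, 8, 10, 9]          # disruptive behaviors per session
treatment = [7, 6, 8, 5, 4, 6]     # the 8 ties the baseline minimum, so it overlaps

print(f"PND = {pnd_reduction(baseline, treatment):.1f}%")
```

For an acceleration target the comparison flips (treatment points above the highest baseline point); PND's sensitivity to a single extreme baseline value is one reason HLM can be a better measure.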
|
|
Term
Criteria for SCDs that meet evidence standards (from What Works Clearinghouse) |
|
Definition
IV (the intervention) must be systematically manipulated with the researcher determining when & how the IV conditions change
Each outcome measure (the DV) must be measured systematically over time by more than one assessor
IOA collected in each phase for minimum of 20% of data points
IOA must be 80-90%; kappa=.60 (minimum)
Study must include 3 attempts to demonstrate intervention effect at 3 different points in time or with 3 different phase repetitions
Each phase must have a minimum of 3 data points |
|
|
Term
Basic Single Case Experimental Designs |
|
Definition
Reversal/Withdrawal
Multiple Baseline
Multielement (Alternating Treatments)
Changing criterion (accelerating or decelerating) |
|
|
Term
Types of Reversal Designs |
|
Definition
ABAB (main effects design)
ABAC (main effects)
A-B-B+C-A-C-B+C (interaction effects design) |
|
|
Term
Types of Multiple Baseline Designs |
Definition
Across participants Across settings Across behaviors |
|
|
Term
Threats to internal validity |
|
Definition
Ambiguous temporal precedence (A then B then A then B)
Selection (each participant exposed to all conditions)
History (phase repetition/replication-ABAB)
Maturation (3 replications of effect at 3 different points in time)
Statistical regression (repeated measurement of DV over sessions/days/phases)
Attrition (it happens—people die, move, get in bar fights, get arrested, or otherwise disappear)
Testing (reactivity effects) – one of the bigger threats in SCDs
Instrumentation (observer drift, bias, complexity of code) – one of the bigger threats in SCDs |
|
|
Term
Historical Antecedents of RTI
National Research Council Report (1982) on validity of SPED |
|
Definition
Validity of SPED evaluated on basis of 3 criteria - Quality of general education program
- Value of special education program in producing important student outcomes
- Accuracy & meaningfulness of assessment process in identification of disability
First 2 criteria emphasized quality of instruction
Third criterion emphasized evidential & consequential bases for test use & interpretation – basically a validity issue (Messick, 1995) |
|
|
Term
Historical Context of RTI
PL 94-142 Education of All Handicapped Children Act of 1975 |
|
Definition
Free appropriate public education (FAPE)
Least restrictive environment (LRE)
Protection in Evaluation Procedures (PEP) – no single test or measurement should be used to place in SPED; tests should be psychometrically sound, valid, etc.
Individualized Education Program (IEP) – appropriate is defined by the IEP |
|
|
Term
Historical Context of RTI: Learning Disabilities Initiative (2000, Office of Special Education Programs, OSEP) |
|
Definition
Planning meeting for discussing SLD – a meeting was needed because LD is new (1963), over ½ of kids in SPED are LD, and there is no good definition of LD
Synthesize & organize most current & reliable research on key issues in SLD
Brought together a diverse group of 18 stakeholders
Selected 9 issues for white papers
Identified potential authors & potential respondents |
|
|
Term
When did term LD come about? |
|
Definition
1963 |
|
Term
Historical Context of RTI
Learning Disabilities Summit (8/27, 8/28, 2001) |
|
Definition
Invitation-only LD Summit in Washington, DC
Each of 9 white papers presented 3 times to different audiences:
- Historical Perspective (Hallahan & Mercer)
- Early Identification of LD (Jenkins & O'Connor)
- Classification of LD (Fletcher et al.)
- LD As Operationally Defined by Schools (MacMillan & Siperstein)
- Discrepancy Models in the Identification of LD (Kavale)
- Responsiveness to Intervention in Identification of LD (Gresham)
- Direct Assessment of Processing Weaknesses in LD (Torgesen)
- Clinical Judgments in Identifying & Teaching LD (Wise & Snyder)
- LD versus Low Achievement Meta-Analysis (Fuchs et al.) |
|
|
Term
Historical Context of RTI
Stakeholder Roundtables |
|
Definition
First roundtable (October, 2001) - Select group of researchers including white paper authors - Shared information & debated issues - Implications for research, policy, & practice - Consensus paper (we did have some detractors on certain issues)
Second Roundtable (November, 2001) - Representatives from all national LD organizations (NJCLD, LDAA, DLD-CEC) - Responded to white papers & consensus chapter |
|
|
Term
Historical Context of RTI
President's Commission on Excellence in SPED (2002) Findings |
|
Definition
Finding 1: Focus on process rather than results
Finding 2: Rare use of prevention & intervention
Finding 3: Two educational systems: General Education versus Special Education
Finding 4: Current system often fails children and parents
Finding 5: Compliance to law driven by litigation pressures rather than what’s good for kids
Finding 6: Current methods of identification lack validity
Finding 7: Special education requires highly qualified teachers |
|
|
Term
IDEA 2004 |
Definition
Individuals with Disabilities Education Improvement Act of 2004
Signed into law by George W. Bush |
|
|
Term
Advantages of RTI over the refer-test-place approach |
Definition
Early Identification Academic & Behavioral Difficulties - Poor readers at end of 1st grade will be poor readers at end of 5th grade - Conduct/antisocial behavior by end of 3rd grade predicts lifetime antisocial behavior - Longer behavior persists, the more resistant behavior is to intervention
Risk versus Deficit Approach - Screen all students for risk (mammogram-PAP-PSA-colonoscopy) - Refer-Test-Place operates under a deficit model ("wait to fail")
Reduction of Identification Biases (gender-SES-minority group membership)
Focus on Student Outcomes - Positive student outcomes - Direct measurement of achievement, behavior, & instructional environment |
|
|
Term
Problem Solving Approach in RTI (Interview schedules) |
|
Definition
Problem Identification: What is the problem?
Problem Analysis: Why is the problem occurring?
Plan Design/Implementation: What should be done about the problem?
Plan Evaluation: Did the plan work in changing behavior? - Data-based decision making - Goal attainment - Social validation |
|
|
Term
3-Tier Model of Intervention |
Definition
Universal Interventions (vaccinations) - All students - Schoolwide or classwide - 80-85% respond favorably
Selected Interventions (go to the Dr.) Some students - Classroom-based strategies (via consultation/problem solving) - Small group interventions (protocol-based) - 10-15% will respond
Targeted/Intensive Interventions (go to a specialist) - Individualized interventions - Intense & powerful - Function-based interventions - 3-5% will respond |
|
|
Term
5 Fundamental Principles of RTI |
|
Definition
1) Intensity of intervention matched to degree of unresponsiveness to intervention
2) Movement through levels based on inadequate response
3) Decisions regarding movement through levels based on empirical data collected from variety of sources
4) Increasing body of data collected to inform decision-making
5) SPED & IEP determination considered only after student shows inadequate response to most intensive interventions available |
|
|
Term
Essential Features of RTI |
|
Definition
Tx Validity (assmt directly linked to intervention)
Progress Monitoring (direct measurement of intervention response)
Tx Integrity (integrity of intervention directly measured)
Durability (maintenance of effects evaluated)
Consumer Satisfaction (social validation of intervention/outcomes) |
|
|
Term
3 Types of RTI |
Definition
Preventive RTI - Universal prevention (primary prevention)—To prevent problems - Universal screening 3 times per year - Multiple gating procedures – assessment, recommendation by teacher; need to pass multiple gates
Reactive RTI - Selected interventions (secondary prevention)—To reverse problems - Replaces refer-test-place approach - Moves from assessment-oriented to intervention-oriented practices
Eligibility RTI - Most intense (tertiary prevention)—To reduce problems - Used to make eligibility determinations - Disability versus need decisions |
|
|
Term
Treatment (Tx) Integrity |
Definition
Degree to which intervention is implemented as intended or planned
Other terms: - tx fidelity - procedural reliability - compliance - adherence
Logic of Tx Integrity Concept = observed changes in DV MUST be attributed to changes in IV - Best way to ensure this is to measure extent to which tx is implemented |
|
|
Term
3 Dimensions of Tx Integrity |
|
Definition
Tx Adherence
Interventionist Competence
Tx differentiation |
|
|
Term
Tx Adherence |
Definition
Degree to which an intervention is implemented as planned or intended |
|
|
Term
Interventionist Competence |
|
Definition
Interventionist's skill and experience in implementing a particular tx |
|
|
Term
Tx Differentiation |
Definition
Extent to which interventions differ on critical dimensions |
|
|
Term
A breakdown in any of the 3 dimensions of tx integrity requires a different course of action depending on which area is the problem |
|
Definition
Adherence: Performance feedback & systematic monitoring of treatment
Competence: Training & feedback to increase competence
Differentiation: Specification of theoretical differences among treatments |
|
|
Term
How commonly is tx integrity assessed in ABA? |
|
Definition
Peterson et al. (1982) reviewed 539 studies in JABA 1968-1980 - Only 20% of studies reported data on treatment integrity - 16% of these studies did not operationally define the IV (the treatment)
Gresham, Gansle, & Noell (1993) reviewed 158 JABA studies 1980-1990 - Only 16% (25 studies) provided treatment integrity data - 32% provided operational definition of the IV (the treatment)
Wheeler et al. (2006) reviewed 60 studies on autism 1993-2003 - Only 18% reported data on treatment integrity - 92% provided operational definition of IV
McIntyre et al. (2007) reviewed 152 JABA studies 1991-2005 - 30% provided data on treatment integrity - 95% operationally defined the IV - 39% of studies considered to be at no risk for treatment inaccuracies - 45% of studies considered to be high risk for treatment inaccuracies |
|
|
Term
Take homes from tx integrity articles in ABA |
|
Definition
Reporting of integrity data stable & low from 1968-present
Some (many ABA types) argue that behavior change ensures accurate treatment implementation (????)
Other Agencies Now Mandate Collection of Integrity Data - American Psychological Association (Divisions 12, 16, 53, & 54) - U.S. Department of Education (Institute of Educational Sciences) - National Institutes of Health |
|
|
Term
Ways to Measure Treatment Integrity |
|
Definition
Direct observations of treatment implementation
Checklist assessment based on observation (Likert scale from low to high)
Self report/monitoring of treatment integrity
Permanent product recording of treatment implementation |
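Checklist-based integrity scoring reduces to a simple percentage of planned steps implemented. A minimal sketch against the deck's 80% criterion; the intervention step names and session record are hypothetical:

```python
# Scoring treatment integrity from a direct-observation checklist:
# percent of planned intervention steps actually implemented.

def integrity_pct(steps_implemented):
    """Percent of planned intervention steps implemented in a session."""
    return 100.0 * sum(steps_implemented.values()) / len(steps_implemented)

# Hypothetical session record (True = step observed as implemented)
session = {
    "stated expectations":      True,
    "delivered prompt":         True,
    "provided reinforcement":   True,
    "withheld attention (EXT)": False,   # step skipped this session
    "recorded data":            True,
}

pct = integrity_pct(session)
print(f"Treatment integrity: {pct:.0f}%")   # 4/5 steps = 80%
print("Meets 80% criterion" if pct >= 80 else "Below 80% criterion")
```

The same tally works for permanent-product recording; a Likert checklist would instead average the ratings rather than count binary steps.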
|
|
Term
How high does integrity have to be? Is 100% required in all cases? |
|
Definition
Some problems & participants—70% integrity might be good enough
Other problems & participants might require 90% integrity
Possible Solution: Treatment Integrity Effect Norms - Treatment effect norm is average outcomes of given intervention (meta-analysis) - Similar logic could be used to catalog which interventions at what levels of integrity measured by what methods produce given levels of outcomes - Good Behavior Game: Might find that 80% integrity as measured by direct observations is required to produce socially valid reductions in disruptive behavior (ES=0.85) |
|
|
Term
Study Questions given by Gresham |
|
Definition
Describe evidence-based practices. How is evidence classified?
Describe and contrast the 4 basic research approaches for establishing scientific evidence.
Describe the logic of single case experimental designs. How do they differ from group experimental designs? Give an example of each of the 4 basic single case experimental designs.
How do single case experimental designs meet internal validity standards?
What are the 7 quantitative features of single case experimental design data?
How do single case experimental designs control for threats to internal validity?
What is RTI? Describe 4 advantages of RTI over the “refer-test-place” approach
What is the 3-tier model? Describe 3 types of RTI.
What is treatment integrity? What are its dimensions? How common is integrity assessment in ABA? How is integrity measured? |
|
|