Term
The 8 basic characteristics of all single-subject designs, and descriptions. |
|
Definition
1. Specify the target (problem or objective/goal): What are we going to try to change?
2. Measure the target by forming its operational definition: specifying what operations or measurement procedures we will use to define the target (how often it occurs, at what intensity, etc.) so that both client and practitioner know what they are dealing with.
3. Baseline and intervention phases: baseline = target(s) measured with no intervention (trying to understand the nature and extent of problems); intervention = one or more target-focused intervention practices are introduced.
4. Repeated measures: the heart of SSD is the collection of repeated information on the target problems or objectives (time-series design).
5. Practice design: the sum of systematic and planned interventions chosen to deal with a particular set of targets to achieve identified objectives and goals.
6. Evaluation design: special types of research designs that are applied to the evaluation of practice outcomes.
7. Analysis of data (visual): monitoring = reading data for improvement, deterioration, or no change in order to maintain or revise the SSD; evaluation = a visual analysis of data that reveals marked positive differences between baseline and intervention is sometimes used as the basis for overall accountability.
8. Decision-making on the basis of findings: the ultimate purpose of doing SSD evaluations is to be able to make more effective and humane decisions about resolving problems or promoting desired objectives. |
|
|
Term
T or F: Practice evaluation is the systematic, ongoing, and objective assessment of social work practice outcomes. |
|
Definition
|
|
Term
Define the term objective. |
|
Definition
|
|
Term
True or False: Continuous, ongoing observation is accomplished using repeated measures during baseline, treatment, and follow-up. |
|
Definition
|
|
Term
Repeated measures are used to assess: a. the same behavior/target problem b. change over regular periods of time (such as every day or every week) c. whether changes are taking place before, during, and after treatment d. all of the above |
|
Definition
|
|
Term
The intensive study of a single organism/individual is called: a. single-subject research b. single-system research c. group research d. just a and b e. all of the above |
|
Definition
|
|
Term
Describe the purpose of the baseline phase of evaluation. How do the baseline and intervention/treatment phases differ? |
|
Definition
Baseline serves as an assessment used to predict what would happen in the future if there were no treatment. During baseline the target is measured but no intervention is introduced; during the intervention/treatment phase, measurement continues while one or more target-focused interventions are applied. |
|
|
Term
Sally currently is treating a 55-year-old woman who has been diagnosed with major depression. Sally instructed her client to complete the Beck Depression Inventory (BDI) every other day before Sally introduced cognitive-behavior therapy. What should Sally do with the scores from her client's BDI? How will Sally analyze the data to determine whether her client is improving? |
|
Definition
Plot scores on a graph and visually analyze the data. |
|
|
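The plot-and-inspect step can be complemented with a simple numeric summary. A minimal sketch, using hypothetical BDI scores (lower = less depressed); the values are invented for illustration and are not from any real client:

```python
from statistics import mean

# Hypothetical BDI scores recorded every other day (illustrative only).
baseline = [31, 29, 32, 30, 31]        # A phase: before cognitive-behavior therapy
intervention = [28, 25, 22, 19, 17]    # B phase: after therapy begins

# A numeric complement to visual analysis: compare phase means.
baseline_mean = mean(baseline)          # 30.6
intervention_mean = mean(intervention)  # 22.2

# Lower BDI scores indicate less depression, so a drop suggests improvement.
improved = intervention_mean < baseline_mean
print(baseline_mean, intervention_mean, improved)
```

In practice the plotted points, not just the means, are inspected for level, trend, and stability.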
Term
True or False: Measurement may be defined as the process of assigning labels (such as numbers) to things according to rules. |
|
Definition
|
|
Term
True or False: Measurement is not needed in order to carry out a single-system evaluation. |
|
Definition
|
|
Term
Some advantages of measurement are: a. allows the practitioner to monitor changes in the target b. does not require talking to the client about his or her treatment c. does not require that the practitioner develop criteria for assessing change d. all of the above |
|
Definition
a. allows the practitioner to monitor changes in the target |
|
|
Term
Test-retest reliability: a. is determined by retesting a group with an alternative instrument b. is an attempt to evaluate systematic measurement error c. both a and b d. examines the stability (or consistency) of a measure on two different occasions |
|
Definition
d. examines the stability (or consistency) of a measure on two different occasions |
|
|
Term
If the instrument you are considering adequately samples the kinds of things (e.g. potential items on a test) about which conclusions are to be drawn, the measure would have good __ validity. |
|
Definition
|
|
Term
Describe how you might determine the face validity of an instrument designed to assess child maltreatment. |
|
Definition
-If there is social acceptability -If the client and practitioner think the tool is asking the appropriate questions |
|
|
Term
Describe why it is important to consider random and systematic error. |
|
Definition
We want to know how much measured or observed change in a client's problem is due to actual change in the problem. The basic issue is that measurement error can lead to erroneous practice decisions and less effective practice. |
|
|
Term
True or False: Direct measures, such as direct observation instruments, assess the actual situation, problem, or behavior of concern. That is, the observations or measurements are directly representative of how the client actually behaves. |
|
Definition
|
|
Term
True or False: A target refers to the specific object of preventative or intervening services that is relevant to the client. |
|
Definition
|
|
Term
Targets may be vague, as long as they are relevant to the client. |
|
Definition
|
|
Term
Goals/outcomes always should be selected prior to assessment so you can focus more specifically on the variables of interest to you rather than the client. |
|
Definition
|
|
Term
The focus in thinking about measurement of problems is on "what" and "when" issues rather than "why" issues. |
|
Definition
|
|
Term
A teacher reports to the school social worker that Amy never sits in her chair during her math class. The school social worker replies, "That's because Amy has ADHD." The school social worker has __ the problem by assigning cause to a label or construct. |
|
Definition
|
|
Term
Describe some implications for treatment and measurement of Amy's problem, as described by the school social worker. |
|
Definition
- might not be accurate - has to take medication - focused on ADHD rather than on client |
|
|
Term
True or False: Only the client really can collect data on the client's feelings, thoughts, and actions (behaviors) |
|
Definition
|
|
Term
True or False: Only the client really can collect data on the client's thoughts/cognitions. |
|
Definition
|
|
Term
The vertical (Y) axis line on a chart always refers to: a. time b. the target c. parameters of reliability d. none of the above |
|
Definition
|
|
Term
The horizontal (X) axis line on a chart always refers to: a. time b. the target c. parameters of reliability d. none of the above |
|
Definition
|
|
Term
For charting targets, you can use: a. numbers b. percentages c. rates d. all of the above e. none of the above |
|
Definition
|
|
Term
True or False: "Behavior" refers to observable human activities. |
|
Definition
|
|
Term
True or False: Training clients or others to collect the data you want is an absolutely necessary step in getting usable data. |
|
Definition
|
|
Term
If you observe for 15 minutes on day 1, 20 minutes on day 2, 35 minutes on day 3, and 60 minutes on day 4, you would transfer the raw data into a __. |
|
Definition
|
|
Term
"Acceptable inter-observer reliability is over 85%." Describe what this statement means. |
|
Definition
Inter-observer reliability is the extent to which two observers agree that something did or did not occur; agreement above 85% is generally considered acceptable. |
|
|
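As a sketch, percentage agreement can be computed by dividing the number of intervals on which two observers agree by the total number of intervals; the observer records below are hypothetical, and this simple agreement index is only one of several possible reliability indices:

```python
def percent_agreement(obs1, obs2):
    """Share of intervals where both observers recorded the same thing."""
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return 100 * agreements / len(obs1)

# 1 = behavior occurred in the interval, 0 = it did not (hypothetical data).
observer_a = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
observer_b = [1, 0, 1, 1, 0, 1, 1, 1, 1, 1]

agreement = percent_agreement(observer_a, observer_b)  # 90.0
print(agreement, agreement > 85)  # above the 85% rule of thumb
```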
Term
Counting the number of cigarettes smoked is an example of a __ measure. a. direct observation b. individualized rating scale c. standardized questionnaire |
|
Definition
|
|
Term
True or False: An individualized rating scale (IRS) is tailor-made for each client and situation, as the need arises. |
|
Definition
|
|
Term
True or False: The following would be an acceptable rating scale for an IRS: Intensity of Joe's depression: 5 = very sad, to the point of quitting school 4 3 = satisfied with his performance at school 2 1 = very satisfied with his performance at work |
|
Definition
|
|
Term
True or False: The main trouble with individualized rating scales is that they have little face validity. |
|
Definition
|
|
Term
List the advantages of standardized questionnaires |
|
Definition
- pre-tested for reliability and validity - structured way of eliciting important and comprehensive information from client - SQs are efficient, simple to use, and easily accessible |
|
|
Term
True or False: The interaction log is used when the problem clearly involves verbal exchanges between the client and significant others. |
|
Definition
|
|
Term
Will self-monitoring of positive or desirable behavior tend to increase or decrease reactivity? |
|
Definition
|
|
Term
Watching a client or a family interaction from behind a one-way mirror, unknown to the client or family, is an example of: a. reactive measure b. obtrusive measure c. client log d. all of the above e. none of the above |
|
Definition
|
|
Term
True or False: A "measurement package" is a unique combination of measures specifically designed for each client. |
|
Definition
|
|
Term
True or False: Baseline, treatment, and follow-up are all types of phases in single-subject designs. |
|
Definition
|
|
Term
Explain how a practitioner would establish the generality of their intervention. |
|
Definition
- Replicating the results from one case, problem, setting, or situation to another. - Probabilities: The binomial distribution can be used to determine the number of successful replications needed for evidence of generalization. You can find available data and calculate the percentages of successes over a number of studies. - Meta-analysis: a method for quantitatively aggregating, comparing, and analyzing the results of several studies. |
|
|
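The binomial-probability idea can be sketched as follows. The 50/50 chance-success assumption and the 7-of-8 figures are purely illustrative:

```python
from math import comb

def binomial_tail(n, k, p=0.5):
    """P(at least k successes in n trials) given chance success probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Suppose 7 of 8 replications succeeded. How likely is that by chance alone,
# assuming (hypothetically) a 50/50 chance of success per case?
prob = binomial_tail(8, 7)
print(round(prob, 4))  # 0.0352 -> unlikely to be chance alone
```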
Term
If a chart shows wildly fluctuating data in the A phase, the clearest problem in interpreting that design is: a. lack of stability b. order effects c. concomitant change d. all of the above |
|
Definition
|
|
Term
What are the 4 methods of data collection? |
|
Definition
1. Direct Observation 2. Individualized Rating Scales 3. Standardized Questionnaires 4. Logs |
|
|
Term
What are the Advantages of Direct Observation? |
|
Definition
-The instrument is flexible and allows measurement of behavior whether it is discrete or continuous -Instrument can record several different behaviors at the same time -Data can easily be converted to percentages -Provides an indication of changes in frequency or duration |
|
|
Term
What are the disadvantages of Direct Observation? |
|
Definition
-Requires undivided attention of observers
-Instrument provides an estimate of frequency and duration, but not a direct determination
-The duration of the intervals is sometimes difficult to establish |
|
|
Term
When would an Individualized Rating Scale be selected? |
|
Definition
Might be selected when behaviors are difficult, impossible, or undesirable to count
Target may not involve behavior, or behaviors are too difficult, imprecise, or infrequent to count
No one is available to count behaviors |
|
|
Term
What are the advantages/characteristics of an individualized rating scale? |
|
Definition
Tailored to individual to measure what has been identified as important to the client
Allows flexibility in targets that can be measured
Allows flexibility in who can provide the information: client, practitioner, concerned others, independent evaluators
Don’t require much time to administer and score
IRSs can be used to measure intensity of target (e.g., pain)
IRSs can be used to measure private events (e.g., internal thoughts and feelings)
IRSs can be used to measure change over time |
|
|
Term
What are the disadvantages of an Individualized Rating Scale? |
|
Definition
Reliability and validity for particular IRSs have not been established
Client may be unwilling to use an IRS |
|
|
Term
How does a therapist construct an Individualized Rating Scale? |
|
Definition
Prepare client by operationally defining the target (e.g., feelings)
Select the rating dimension (e.g., intensity, frequency, importance, seriousness, etc.)
Select the number of responses (e.g., feelings about child, parent, spouse) and categories (e.g., love, anger, resentment)
Create equidistant categories (make sure intervals are of equal size), usually 5 or 7-point Likert scale is chosen
Create anchors by identifying specific labels or descriptions for what the numbers mean (provide labels for 1 and for 5/7)
Construct overall summary score by averaging scores on individual questions |
|
|
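The last construction step (an overall summary score as the average of the item scores) can be sketched as below; the items and ratings are hypothetical:

```python
# Hypothetical 5-point IRS ratings (1 = low intensity, 5 = high intensity)
# for three anger-related items rated on one day.
ratings = {"toward child": 2, "toward parent": 4, "toward spouse": 3}

# Overall summary score = average of the individual item ratings.
summary = sum(ratings.values()) / len(ratings)
print(summary)  # 3.0
```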
Term
What are the 3 different types of time samplings? |
|
Definition
Whole-Interval Time Sampling: behavior must occur during the entire interval
Partial-Interval Time Sampling: behavior must occur at least once during the interval
Momentary Time Sampling: observe and record occurrence or nonoccurrence of the behavior at the end (or beginning) of the interval |
|
|
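The three interval rules can be sketched as simple predicates over one observation interval. The second-by-second occurrence list is hypothetical, not a standard recording format:

```python
# `occurred` holds second-by-second occurrence (True/False) for one interval.
def whole_interval(occurred):
    return all(occurred)    # behavior lasted the entire interval

def partial_interval(occurred):
    return any(occurred)    # behavior occurred at least once in the interval

def momentary(occurred):
    return occurred[-1]     # behavior occurring at the end of the interval

interval = [True, True, False, True, True]   # hypothetical 5-second interval
print(whole_interval(interval), partial_interval(interval), momentary(interval))
# -> False True True
```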
Term
How do you specify a target? |
|
Definition
-Decide how many behaviors to record -Decide who should collect data -Decide when and where to record -Train the observer -Collect baseline data |
|
|
Term
What are the 6 characteristics of Standardized Questionnaires? |
|
Definition
Can be used to measure feelings, thoughts, behaviors, etc.
Reliability and validity have been demonstrated
Vary in their structure, but usually contain many questions rated along some dimension
Vary in the number of concepts they measure
The informant often varies (e.g., client, practitioner, concerned other)
SQs vary in terms of time, effort, and training needed to interpret the instrument |
|
|
Term
What are the Advantages of Standardized Questionnaires? |
|
Definition
Pretested for reliability and validity
Structured way of eliciting important and comprehensive information from client
SQs are efficient and are simple to use |
|
|
Term
What are the Disadvantages of Standardized Questionnaires? |
|
Definition
Validity of questionnaires can be questioned
You won’t know all of the information concerning reliability and validity
Your client might differ from the group of individuals used during its creation
Target of questionnaire may not correspond to target of your client
Concepts might be too general, and only indirectly related to your client
Too many questionnaires will overwhelm the client, and he or she won’t take his or her responses seriously |
|
|
Term
What process do you use in selecting a Standardized Questionnaire? |
|
Definition
Determine the purpose of the questionnaire: select a questionnaire that is directly related to your client's target.
Determine relevance to treatment planning: Will it show that the target is improving or deteriorating? Will it provide information for treatment planning?
Examine information concerning reliability (consistency of measurement): Are the scores stable during baseline?
Examine information concerning validity: Is the questionnaire measuring what its name implies? Do the questions represent all of the areas that should have been included?
Is the questionnaire easy to use?
Is the questionnaire accessible? |
|
|
Term
What are the 4 functions of a Client Log? |
|
Definition
Pinpoint and define client problems, clarifying the dynamic and unique contexts in which problems occur
Logs can be incorporated into other measurement systems
Evaluate change over time
Serve a preventative or interventive function by teaching the client to focus on his or her unique situation |
|
|
Term
What are the two types of variation logs? |
|
Definition
Time Variations (Preset and Open)
Target-Category Variations (Exploratory, Target Problem, Interaction, Evaluation) |
|
|
Term
Explain the Preset Time Log. |
|
Definition
Established periods when logs will be completed
You have some idea when target will occur
You need information concerning client’s activities throughout the day |
|
|
Term
Explain the Open Time Log. |
|
Definition
Also called critical incident recording
Events are recorded as soon as they occur
Period of time between each recording might vary
Deals only with data that the client perceives to be important
But the client may be screening out potentially important events |
|
|
Term
What is an Exploratory Log? |
|
Definition
Describes important incidents, but also explores client’s satisfaction with situation and how they might behave differently in the future |
|
|
Term
What is a Target-Problem Log? |
|
Definition
Identifies antecedents and consequences of target problem. |
|
|
Term
What is an Interaction Log? |
|
Definition
A type of Target-Problem Log.
Used when the problem clearly involves verbal exchanges between the client and significant others. |
|
|
Term
What is an Evaluation Log? |
|
Definition
A type of Target Problem Log.
Records incident, but also client’s reaction to it. |
|
|
Term
What are the advantages of client logs? |
|
Definition
-allows pinpointing of client targets -allows better understanding of the client's day-to-day life -allows the client and clinician to better monitor and evaluate target progress -allows better understanding of how the intervention methods are working |
|
|
Term
What are the disadvantages of client logs? |
|
Definition
-requires that the client be literate, capable, willing, and sufficiently disciplined -much preparation is needed in advance -clients can become tired of them (quickly) |
|
|
Term
Define Measurement Package. |
|
Definition
Measurement package is the measures that would be best suited to the specific target(s) involved in a given case, that would most adequately measure the changes expected to occur, and that would evaluate those changes from the points of view of those who are involved. |
|
|
Term
What are the 12 Guidelines for constructing a measurement package? |
|
Definition
1. Try to use more than one measure.
2. Try not to duplicate measures (e.g., two different self-report measures of depression) just for the sake of having multiple measures; select measures that use different measurement methods, that measure different targets, that measure different dimensions of the same target, or that are based on info obtained from different sources.
3. Try to include at least one direct measure, especially if an indirect measure is being used.
4. Try to select the most direct measures.
5. Try to include the observation of overt behaviors (actual functioning), whether in the natural environment or in a contrived setting, whenever applicable.
6. Try to give priority to situation-specific measures rather than global measures (e.g., a measure of social anxiety related to a problem in heterosexual relationships, rather than a measure of general anxiety).
7. Try to include at least one nonreactive measure, especially if a reactive measure is being used.
8. Try not to overburden one person with all the measurement responsibilities.
9. Try to obtain measurement info from more than one person (e.g., client and relevant other).
10. Try to assess what type of change is more crucial in a given case, so that measurement can be geared toward assessing the most appropriate change.
11. Try to select measures that are tailored to the specific client, situation, and target.
12. Try to focus measurement resources on measuring the primary target(s) that brought the client to your attention. |
|
|
Term
What is a "phase", within research design? |
|
Definition
Baseline and intervention periods (phases)
Baselines are used to predict the future pattern of behavior given no intervention
Intervention is what the practitioner does to bring about a behavior change in the client
Phases are labeled with letters:
A = baseline
B = intervention technique (or a set of distinctive techniques used at the same time (a package))
C = another intervention technique (used at a different time) |
|
|
Term
What are some factors that complicate the interpretations of changes within phase? |
|
Definition
Carryover effects: effects of one phase carry over to the next, complicating interpretation of the effects of your intervention
Contrast: clients may react differently to a new intervention simply because it is novel
Order of presentation: the order of interventions can have a causal impact
Incomplete data: phases are not carried out for long enough periods of time
Training phase: baseline and intervention are separated by a training or learning phase |
|
|
Term
What are some threats to internal validity? |
|
Definition
History: events occur outside of client contacts
Maturation: psychological or physiological changes occur
Testing: testing sensitizes clients so that subsequent scores are influenced
Instrumentation: changes in the way the measurement device is used
Dropout: results are dependent on a small sample of data after participants drop out
Regression to the mean: extreme scores become less extreme upon retesting
Diffusion of intervention: clients may learn information intended for others |
|
|
Term
Explain External Validity. |
|
Definition
The extent to which the effect of an intervention can be generalized
Concerned with the extent to which clients, settings, problems, and practitioners are representative
Replication establishes the generalizability of interventions across clients, settings, problems, and practitioners |
|
|
Term
What is the purpose of a baseline? |
|
Definition
Serves as a basis for comparison with information from the intervention phase: represents what could be expected to occur if no intervention happened; represents an attempt to find out how often or how long the problem occurs
Serves as an assessment tool: aids in discovering facts about the client, problem, and situation; what factors are affecting or maintaining the problem?
Serves as a source of information to consider when choosing intervention strategies: helps you choose specific therapy techniques |
|
|
Term
What are the rules regarding the length of the baseline phase? |
|
Definition
Baseline phase should continue until the data are useful; that is, until clear decisions can be made based upon that data (no clear guidelines exist for determining how much data is enough)
Baseline should continue until the data points appear to be stable: data do not show obvious cycles or fluctuations; variability makes both assessment and evaluation difficult; baseline should predict what would happen to the data if no intervention occurred
At least 3 observations are necessary to establish a pattern |
|
|
Term
What are the strengths of a case study design? |
|
Definition
Fosters clinical speculation and innovation
Easily administered in any situation
Can confirm or undermine a theory by giving immediate feedback
Permits investigation of rare phenomena without aggregating large amounts of data |
|
|
Term
What are the limitations of a case study design? |
|
Definition
Active ingredient cannot be determined
Practitioner is casual in specifying problem
Specific procedures are not clearly identified, making replication difficult |
|
|
Term
What are the strengths of the AB Design? |
|
Definition
Reveal clearly whether a change has occurred
Lets the practitioner know whether he or she should change or modify the intervention
Provides evaluation data – whether positive outcomes are being achieved |
|
|
Term
What are the Limitations of the AB Design? |
|
Definition
AB designs do not provide strong evidence that the intervention caused the observed change
AB designs do not permit control of alternative explanations; scientists are constantly seeking out alternative explanations for results |
|
|
Term
What is the significance of an ABA Design? |
|
Definition
With the addition of the second A phase, we have created an experimental design; the term experimental implies that a planned phase change has occurred, allowing the practitioner to search for patterns in the data
ABA design allows a comparison between the first A and B, and a comparison between B and the second A
If B is the causal ingredient, then its removal should return the target to the original baseline level
ABA designs allow the practitioner to study maintenance of treatment effects |
|
|
Term
What are the strengths of an ABA design? |
|
Definition
ABA offers stronger bases for inferring causality
Replication across participants (using the same design) also strengthens argument that intervention is the causal ingredient |
|
|
Term
What are the Limitations of an ABA Design? |
|
Definition
Practitioner must end evaluation process on a nonintervention phase
Practitioner must remove a successful intervention |
|
|
Term
What is the significance of the ABAB Design? |
|
Definition
In the ABAB design, the intervention is manipulated twice, which strengthens the causal case for the intervention
ABAB designs terminate during an intervention phase, which is professionally more acceptable than an ABA design
ABAB designs control for history (context), maturation, regression toward the mean, and some forms of reactivity (e.g., reactivity because of self-monitoring) |
|
|
Term
What are the strengths of the ABAB Design? |
|
Definition
Strongest form of support for causal efficacy of treatment intervention
One form of direct replication of treatment effects in the same individual
Ethically suitable because the evaluation ends during a treatment phase |
|
|
Term
What are the Limitations of the ABAB Design? |
|
Definition
Time-consuming compared to AB design
Impossible to keep experimental phases the same length
ABAB designs are subject to carryover effects |
|
|
Term
What is the significance of Multiple Baseline Design? |
|
Definition
Designs for work with several clients, targets, or settings
Involves intervening with one (client, target, setting) while holding the others constant
The effects are demonstrated by introducing the intervention to different baselines
Allows the practitioner to make causal inferences without removing the intervention
If each baseline changes when the intervention is introduced, the effects can be attributed to the intervention rather than to extraneous events
No need to return behavior to baseline levels of performance |
|
|
Term
How do you implement multiple baseline design? |
|
Definition
Collect baseline data for each element until performance is stable; baselines begin simultaneously
Intervention is applied to the first target
Data continue to be gathered for each behavior; these behaviors should remain at baseline levels
If the first target changes and the others remain stable, change can be attributed to the intervention
When performance of the first target stabilizes, intervention is applied to the second target; data continue to be gathered for each remaining target
Behaviors should change only when the intervention is applied
Extraneous variables might have influenced performance, but one would not expect these events to coincide with the onset of the intervention, nor to affect only one behavior
The other targets serve as control conditions to evaluate what changes can be expected without the intervention
Repeated demonstrations provide convincing support that the intervention was responsible for change
The greater the number of baselines (behaviors, persons, settings), the greater the strength of the demonstration |
|
|
Term
What are the advantages of Multiple Baseline Design? |
|
Definition
Multiple baseline designs can be used to evaluate changes in several targets; problems can be prioritized, and problems requiring immediate attention can be addressed first
Allows the practitioner to assess changes across settings; useful for testing generalizability of changes across locations
Useful when carryover and irreversibility are problematic
Does not present the ethical problems of removing an effective intervention
Easy to use |
|
|
Term
What are the limitations of Multiple Baseline Design? |
|
Definition
Stable baselines cannot be obtained for each target (remedy: wait for unstable targets to stabilize and intervene upon stable baselines)
Order of intervention (remedy: deal with targets in a random order across subjects)
Requires the use of the same intervention across different problems
Interdependence of the baselines: each baseline changes when the intervention is first introduced (remedies: select targets that are independent, use many baselines, introduce a reversal)
Inconsistent effects: the intervention may produce inconsistent effects, and results may be too ambiguous to interpret |
|
|
Term
What is Changing Criterion Design? |
|
Definition
The effect of the intervention is demonstrated by showing that behavior gradually changes over the course of the intervention
Behavior improves in increments to match a criterion for performance that is specified by the intervention
The required level of performance is altered repeatedly over the course of the intervention; used to improve performance over time
Effects of the intervention are shown when performance repeatedly changes to meet the criterion |
|
|
Term
What are the Strengths of the Changing-Criterion Design? |
|
Definition
Intervention not withdrawn or withheld
Does not require more than one target
May motivate client since design requires setting intermediate steps toward goal
Easy to use |
|
|
Term
What are the Limitations of a Changing-Criterion Design? |
|
Definition
Initial baseline must be stable, to show that sequential changes are not already occurring
Carryover and historical variables make data difficult to interpret
No clear rules about the number of criterion shifts or the magnitude of shifts |
|
|
Term
What is a Multi-Elemental Design? |
|
Definition
Compare the effects of two different interventions on one target problem
Two interventions are rapidly alternated so that their effects can be compared quickly
The first phase of the design is baseline
In the second phase, the interventions are rapidly alternated in a counterbalanced fashion
Counterbalancing involves presenting each intervention an equal number of times, but the order of their presentation is random |
|
|
Term
What are the Strengths of a Multi Elemental Design? |
|
Definition
Allows comparison between baseline and intervention
Also allows comparison between two interventions; this comparison can be made because the intervention phases are adjacent
Design is flexible, allowing more than one intervention to be compared
Design does not require removal of the intervention |
|
|
Term
What are some Limitations of a Multi Elemental Design? |
|
Definition
Design does not rule out contrast effects; the client might be aware that two interventions are being applied
Possibility of order effects and carryover effects; will you achieve similar results without alternations?
Question about the ability to use the most useful intervention by itself
Interventions must be alternated until the effects become clear; some practitioners may not be willing to carry out the alternations for the appropriate length of time |
|
|
Term
What is an Interaction Design? |
|
Definition
Used to sort out the differential effects of multiple intervention packages; compares adjacent interventions in a logically controlled manner
Elements of an intervention are studied separately and in combination to determine the interactive effects |
|
|
Term
What are the Two types of Interaction Designs? |
|
Definition
Strip designs involve starting with the intervention package, and then removing the elements
Additive designs involve testing the elements alone and then in combination |
|
|
Term
What are the Strengths of Interaction Designs? |
|
Definition
Designs allow you to determine the differential effects of the elements of an intervention package
Encourages practitioners to try out different interventions |
|
|
Term
What is the limitation of Interaction Designs? |
|
Definition
Implementation requires a great deal of time and control over the relevant variables |
|
|
Term
During Analysis of Graphic Data, what are you trying to distinguish? |
|
Definition
Distinguishing effort, effectiveness, and efficiency |
|
|
Term
What is "Effort," in the analysis of graphic data? |
|
Definition
The work that goes into a service program, including: the amount of time spent with the client, the amount of money provided, and the mileage/distance traveled
Cost figures for these efforts must be calculated |
|
|
Term
What is "effectiveness", in the analysis of graphic data? |
|
Definition
Changes in target that come about as a result of service program Based on comparison of target before, during, and after treatment Involves observation and analysis of observable client changes |
|
|
Term
What is "efficiency", in the analysis of graphic data? |
|
Definition
Changes in the target considered relative to the effort (e.g., time and money) expended to produce them; relates the effectiveness of the service program to its cost |
|
|
Term
What are the two types of Significance? |
|
Definition
PRACTICAL significance (also called clinical significance and social validation): somebody must believe that a meaningful change has occurred
Methods of determining practical significance: average functioning of peers, subjective impression of relevant others, cultural norms and values, goal set by client and practitioner
THEORETICAL significance: does the client's target change in the direction predicted by the theory? Theories provide clear expectations for the patterns that are likely to occur |
|
|
Term
Describe the underlying logic of single-subject designs. |
Definition
Examination of the effects of the intervention at different points over time
Effects of the intervention are replicated at different points so that judgments can be made based on the overall pattern of data
The manner in which intervention effects are replicated depends on the specific design
Baseline is used to predict future performance, and subsequent applications of the intervention test whether the predicted level is violated
Data are plotted graphically |
|
|
Term
What are the 4 Properties of Data? |
|
Definition
Level/Magnitude, Trend, Stability, Latency |
|
|
Term
What is Level, and what does it mean when the level changes? |
|
Definition
Magnitude of the variable as indicated by the data at any point
Change in level: a discontinuity between baseline and intervention; usually focuses on the change between the last data point at the end of one phase and the first data point of the next phase; indicates the intervention caused a change in the target |
|
|
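A change in level can be sketched as the difference between the last baseline point and the first intervention point; the frequency counts below are hypothetical:

```python
# Hypothetical frequency counts of a problem behavior per session.
baseline = [9, 8, 9, 10, 9]      # A phase
intervention = [5, 4, 4, 3, 3]   # B phase

# Change in level: discontinuity between the end of one phase
# and the beginning of the next.
level_change = intervention[0] - baseline[-1]   # 5 - 9 = -4
print(level_change)  # -4: an abrupt drop at the phase change
```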
Term
What is Stability? |
Definition
Clear predictability from a prior period to a later one, usually within the same phase |
|
|
Term
What is Trend? |
Definition
Directionality of the data
Trends can be: increasing, decreasing, flat (no trend), or variable |
|
|
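One common way to quantify trend direction is an ordinary least-squares slope over the observation numbers; this is a sketch with invented data, and visual inspection remains the primary analysis:

```python
def slope(values):
    """Least-squares slope of values against observation index 0..n-1."""
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

increasing = [2, 3, 5, 6, 8]   # hypothetical increasing trend
flat = [4, 4, 4, 4, 4]         # hypothetical flat (no trend) data
print(slope(increasing) > 0, slope(flat) == 0)  # True True
```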
Term
What is Latency? |
Definition
Period between the onset of one condition and changes in performance
Short latencies indicate the intervention caused the change |
|
|
Term
What are some problems in visual inspection of data? |
|
Definition
How data are represented on the graph can distort the image; distortions are artificially produced
Bimodal patterns (two separate peaks) are difficult to interpret
Data are extremely variable
Two observers do not agree on how to interpret the graphs; changes should be clear enough that two observers do agree |
|
|
Term
What are some Complicating Factors in Analyzing the Results of Single-Subject Evaluation? |
|
Definition
Carryover effects: effects of one phase carry over to the next, complicating interpretation of the effects of your intervention
Contrast: clients may react differently to a new intervention simply because it is novel
Order of presentation: the order of interventions can have a causal impact
Incomplete data: phases are not carried out for long enough periods of time
Training phase: baseline and intervention are separated by a training or learning phase |
|
|
Term
What is counterbalancing? |
Definition
Present the interventions an approximately equal number of times, but vary the order of presentation so as not to affect the results. |
|
|
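A counterbalanced alternation schedule for two interventions (labeled B and C, as in the phase notation above) can be sketched as follows; the pair-wise randomization scheme is one simple illustrative approach, not a prescribed procedure:

```python
import random

def counterbalanced_schedule(pairs, seed=None):
    """Each intervention appears an equal number of times, in randomized order."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(pairs):
        pair = ["B", "C"]
        rng.shuffle(pair)       # randomize order within each B/C pair
        schedule.extend(pair)
    return schedule

schedule = counterbalanced_schedule(4, seed=1)
print(schedule, schedule.count("B") == schedule.count("C"))
```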