Term
Components of Communication |
|
Definition
1. Speech
2. Language
3. Pragmatics
4. Voice
5. Fluency
6. Hearing |
|
|
Term
Screening |
|
Definition
- Answers one question: problem vs. no problem?
- Quick, about 15-20 minutes
- Interaction involved, while observing
- Gear tools to the client's age
|
|
|
Term
Evaluation |
|
Definition
Process of arriving at a diagnosis
-ASHA's definition:
data collection + interpretation + decision making
|
|
|
Term
Assessment |
|
Definition
Process of collecting valid + reliable info, integrating it, and interpreting it to make a judgment or decision
-ASHA's definition
observation, screening, data collection |
|
|
Term
Evaluation vs. Assessment |
|
Definition
-Very similar, but:
-Per ASHA, assessment involves just forms of data collection, while evaluation involves data collection plus interpretation & decision making |
|
|
Term
Diagnosis |
|
Definition
To distinguish/gain thorough understanding of a person's problem
"dia" = apart
"gnosis" = to know |
|
|
Term
SLPA [Speech-Language Pathology Assistant] |
|
Definition
Can Do:
- Screenings
- Collect data
- Check equipment
Can't Do:
-perform standardized/nonstandardized tests
-make clinical decisions
|
|
|
Term
Formal Assessment |
|
Definition
- specific protocol to assess specific part of communication
- specific guidelines to administer and score
Associate with:
- Standardized
- Static |
|
|
Term
Informal Assessment |
|
Definition
- Does NOT follow specific protocol
- Flexible
- Can vary from directions
Ex: checklists, observations
Associate with:
- Non-standardized
- Dynamic |
|
|
Term
Static Assessment |
|
Definition
-Formal
-Passive participant [does what he/she is told]
-No help is given, no leeway [unless manual states so]
-not really taking all factors into consideration
-No feedback allowed |
|
|
Term
Dynamic Assessment |
|
Definition
-Participants become more active
-Help can be given
-Administration is fluid, not overly strict
-Could be child directed
-Feedback can be given |
|
|
Term
2 types of Static Assessment |
|
Definition
1. Norm-Referenced
2. Criterion-Referenced |
|
|
Term
Norm-Referenced Assessment |
|
Definition
-Formal, ALWAYS standardized
-commonly used for artic, language, & vocab testing
-each test is administered to a norming sample
-compare the participant's score to the norming sample to determine how far above or below average he/she falls |
|
|
Term
Criterion-Referenced Assessment |
|
Definition
-looks at an individual's performance on a particular task
-mastery/competency based testing
-can be formal or informal
-not always standardized
-usually associated with static
*stops being static when help is given or sways from directions |
|
|
Term
Norm-Referenced vs. Criterion-Referenced |
|
Definition
-Norm-referenced focuses on group similarities & compares the score to a sample/population. ALWAYS standardized, formal
-Criterion-referenced focuses on the individual's abilities. Can be either formal or informal
-Both are types of static assessment |
|
|
Term
Static Assessment vs Dynamic Assessment |
|
Definition
-Static is formal, strict, follows a specific protocol; no help/feedback can be given, can't sway from directions
-Dynamic is informal, doesn't follow a set protocol; help/feedback can be given, can sway from directions
Both are types of assessment, & it's best to use a combination of both types |
|
|
Term
Authentic Assessment |
|
Definition
-assessment in natural environment
-non traditional, not standardized
-performed in context of real life
-can be used in evaluation or therapy session
-describing behaviors: how that person functions in natural environment |
|
|
Term
Case History |
|
Definition
-Pre-evaluation/prior to assessment
-filled out by parent/guardian
-gives background info; can be detailed or vague
-helps you make up questions/prepare for the interview
-helps come up with a plan of action
-placed in the client's permanent file
|
|
|
Term
3 ways to get info before evaluation |
|
Definition
1. Case History
2. Interview with client and/or caretaker
3. Other professional documents [optional]
Ex: Medical records, Other evaluation reports, etc |
|
|
Term
Purposes of Case Histories |
|
Definition
1. Gain info
2. Write interview questions
3. Establish plan of action |
|
|
Term
What types of information are in a case history? |
|
Definition
-Name
-DOB
-Address/phone
-Referral source
-Parents' info
-Problem & its causes
-Hearing status
-Family info
-Risk factors
-Mother's prenatal/pregnancy history
-Health/medical history
|
|
|
Term
Interview |
|
Definition
- the process of verbal & nonverbal exchange between a trained professional and a client seeking help
-fills in gaps of the case history
-first impression from both sides
-time to allow the client/family to raise questions & concerns
*make sure not to let it drag on |
|
|
Term
3 basic purposes of Interviewing |
|
Definition
1. Obtaining info
2. Giving info
3. Providing release and support |
|
|
Term
2 Types of Interviews |
|
Definition
1. Pre-interview
2. Post-interview |
|
|
Term
Pre-Interview |
|
Definition
-Occurs before the evaluation
-Preferably face-to-face
-Collect any info needed, like missing info from case history |
|
|
Term
Post-Interview |
|
Definition
-Occurs after evaluation
-Discuss strengths & weaknesses
-Establish follow-up meeting |
|
|
Term
3 Types of Interview Questions |
|
Definition
1. Closed Ended
2. Open Ended
3. Summary Probe |
|
|
Term
Closed-Ended Questions |
|
Definition
-Requires specific & limited responses
-3 types
1. Yes/No
2. Identification (Wh-questions)
3. Closed-Ended Choice
-client selects from choices embedded in the question |
|
|
Term
Open-Ended Questions |
|
Definition
-recall something in their own words
-events, feelings, experiences
-elaborate
-"why"
-request descriptions
|
|
|
Term
Advantages of Open-Ended Questions |
|
Definition
-opens up more complex issues
-gets into pragmatics w/ adults
-prevents the clinician from putting words in the client's mouth |
|
|
Term
Summary Probe |
|
Definition
-Check your understanding
-summarize/restate what the client said without simply parroting it
-put it into question form |
|
|
Term
CLD Considerations |
|
Definition
CLD: culturally & linguistically diverse
-pay attention & be aware the client's culture may be different
-it's very important
-may need to research
-testing in dominant language may be necessary
-interpreter may be needed |
|
|
Term
Things to consider if working with a Multicultural child |
|
Definition
- age of acquisition of both languages
- where each language is being used
- amount of exposure to each language |
|
|
Term
Steps in Planning an Evaluation |
|
Definition
- Case history
- Interviewing
-Choose test measures/tools |
|
|
Term
Things to look at when choosing testing tools |
|
Definition
-Evidence
-family values
-clinician knowledge/experience
-developing good questions
-searching literature
-critiquing evidence
-age of client *VERY IMPORTANT* |
|
|
Term
Age Ranges for Testing Tools |
|
Definition
- Birth to 3
- 3 to 5
- 5 and above |
|
|
Term
Birth to 3 |
|
Definition
- most likely won't tolerate formal testing
- short attention span & won't sit for a long time
- informal testing
- play, observe
- checklists, criterion-referenced |
|
|
Term
3 to 5 |
|
Definition
-greater chance of choosing formal test
-MUST be prepared for everything
-formal & informal
-Have backup tests |
|
|
Term
5 and Above |
|
Definition
-should be able to do formal testing
-unless attention span is so impaired
[Ex: autism]
|
|
|
Term
Things to do to prepare for giving a test |
|
Definition
-Study test: manual, format, and scoring
-Practice
-Have a game plan
-Prepare your materials beforehand
*Try to do something natural for all age groups |
|
|
Term
How do we choose best formal tests? |
|
Definition
-choose tests that measure what we intend to measure
-choice between norm-referenced vs. criterion-referenced
-do we need a generalized score or personal score?
-look at manuals
-age appropriate? *Vital importance*
-validity/reliability [manuals will have info] |
|
|
Term
Sensitivity |
|
Definition
-a test's ability to correctly identify individuals who DO have the disorder [true positives] |
|
|
Term
Specificity |
|
Definition
-a test's ability to correctly identify individuals who do NOT have the disorder [true negatives] |
|
|
Term
True Positive |
|
Definition
a person with a disorder is accurately identified as having the disorder
-test positive for disorder |
|
|
Term
False Positive |
|
Definition
a person who doesn't have a disorder is identified with a disorder
-wrongly identified |
|
|
Term
False Negative |
|
Definition
a person who has a disorder, but tests within a normal range |
|
|
Term
True Negative |
|
Definition
a person who doesn't have a disorder tests within a normal range |
|
|
Term
How does sensitivity and specificity factor into testing tools? |
|
Definition
- newer test manuals report both results
-tests should reach 90% [0.90] for both |
|
|
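Worked example [not from the deck]: a minimal Python sketch of how sensitivity & specificity fall out of the four outcomes above; the counts are invented.

    # Hypothetical counts from a made-up validation study
    true_pos = 45    # has disorder, test positive
    false_neg = 5    # has disorder, test negative (missed)
    true_neg = 93    # no disorder, test negative
    false_pos = 7    # no disorder, test positive (wrongly identified)

    # Sensitivity: share of people WITH the disorder the test catches
    sensitivity = true_pos / (true_pos + false_neg)   # 45/50 = 0.90

    # Specificity: share of people WITHOUT the disorder the test clears
    specificity = true_neg / (true_neg + false_pos)   # 93/100 = 0.93

    print(sensitivity, specificity)  # both meet the 0.90 benchmark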
Term
Validity |
|
Definition
- degree to which a procedure actually measures what it intends to measure |
|
|
Term
Reliability |
|
Definition
-dependability of a test
-consistency of scores upon repeated measurements of the same group |
|
|
Term
4 Types of Validity |
|
Definition
1. Face validity
2. Construct validity
3. Content validity
4. Criterion-related |
|
|
Term
Face Validity |
|
Definition
- appears to measure what it claims to measure |
|
|
Term
Construct Validity |
|
Definition
- degree to which a test measures the theoretical construct it is intended to measure
-key to success of test |
|
|
Term
Content Validity |
|
Definition
- Careful examination of the content of the test
a: appropriateness of the types of items
b: completeness of the item sample
c: the way in which the items assess the content
Ex from the PLS-3: "show me the picture of night/day"
- are the words of the artic test too hard? |
|
|
Term
Criterion-Related Validity |
|
Definition
-established by the use of an external criterion
Consider:
Concurrent
Predictive |
|
|
Term
Concurrent Validity [in Criterion-Related Validity] |
|
Definition
-measure of how an individual's current score on one instrument can estimate his/her current score on another instrument in the same area, at around the same time
Ex: GFTA compared to the Expressive One Word Test
[used to prove one test is just as good as another test] |
|
|
Term
Predictive Validity
[in Criterion-Related Validity] |
|
Definition
-how an individual's score can estimate scoring at a later time on a different instrument
Ex: PSAT predicts how you'll do on the SAT |
|
|
Term
Interjudge Reliability |
|
Definition
-agreement between two independent judges when the test is given & scored according to its instructions/scoring guidelines
-the vaguer the instructions, the more they will hurt interjudge reliability
|
|
|
Term
Intrajudge Reliability |
|
Definition
-established if test results are consistent when the same person gives the test on more than one occasion
*important for development of test |
|
|
Term
Test-Retest Reliability |
|
Definition
-When the same test is given over time, you should get similar results
-Given, then retested two weeks later
*Done in developmental stages of test |
|
|
Term
Correlation Coefficient |
|
Definition
-describes how 2 sets of scores increase or decrease together
-from the same group of subjects
-related in a positive or negative way
-looks at how reliably the two sets of numbers relate |
|
|
Term
Comparing Correlation Coefficients |
|
Definition
Positive Perfect Relationship: + 1.0
Negative Perfect Relationship: - 1.0
-we want as close to a perfect positive relationship as possible
- desirable result = 0.90 or higher
-if there is no relationship = 0
|
|
|
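Worked example [not from the deck]: a small Python sketch of a correlation coefficient (Pearson's r) computed from two invented score sets for the same group of subjects; the function name and numbers are mine.

    import math

    def pearson_r(xs, ys):
        """Pearson correlation coefficient between two score lists."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    first_admin = [85, 90, 78, 92, 88]   # invented test scores
    second_admin = [84, 91, 80, 95, 86]  # same subjects, retested
    print(round(pearson_r(first_admin, second_admin), 2))  # ~0.94, above the 0.90 target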
Term
Psychometrics |
|
Definition
- branch of psychology that deals with design, administration, and interpretation of quantitative data |
|
|
Term
Measures of Central Tendency |
|
Definition
-describe how scores in a sample cluster around a central value
1. Mean: average
2. Median: middle
3. Mode: most often |
|
|
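Worked example [not from the deck]: Python's statistics module computing all three measures of central tendency on an invented score set.

    from statistics import mean, median, mode

    scores = [7, 8, 8, 9, 10, 10, 10, 11, 12]  # made-up sample
    print(mean(scores))    # average ≈ 9.44
    print(median(scores))  # middle score = 10
    print(mode(scores))    # most frequent score = 10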
Term
Variance |
|
Definition
-numerical index that describes the dispersion of a set of scores around the mean |
|
|
Term
Standard Deviation [SD] |
|
Definition
-used in test interpretation
SD = √variance |
|
|
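Worked example [not from the deck]: a Python sketch showing variance as the average squared distance from the mean, and SD as its square root, using the same invented scores as above.

    import math

    scores = [7, 8, 8, 9, 10, 10, 10, 11, 12]
    m = sum(scores) / len(scores)

    # Variance: average squared deviation of each score from the mean
    variance = sum((s - m) ** 2 for s in scores) / len(scores)

    # Standard deviation: SD = √variance
    sd = math.sqrt(variance)
    print(round(variance, 2), round(sd, 2))  # ≈ 2.25 and ≈ 1.5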
Term
Raw Score |
|
Definition
- # of correct items
EXCEPT in Artic tests = # of Artic Errors |
|
|
Term
Basal |
|
Definition
-the first set of items all passed, just before the 1st failure
-all items below the basal are considered correct
-don't have to give that part; saves time |
|
|
Term
Ceiling |
|
Definition
- highest item of a sequence in which a certain number of items has been failed
*different for each test |
|
|
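Worked example [not from the deck]: a Python sketch of basal/ceiling scoring. The item numbers and run lengths are hypothetical; as the card says, the actual rules differ for each test.

    def raw_score(first_item_given, responses):
        """Raw score when a basal has been established.
        first_item_given: number of the first item actually administered
        (every item below it is credited as correct).
        responses: 1 = pass, 0 = fail for each administered item, ending
        once the ceiling rule is met (e.g., 3 consecutive failures)."""
        credited = first_item_given - 1       # items below the basal
        return credited + sum(responses)      # plus passes actually earned

    # Hypothetical run: start at item 12, ceiling = 3 straight failures
    print(raw_score(12, [1, 1, 0, 1, 0, 0, 0]))  # 11 credited + 3 = 14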
Term
Percentile Rank |
|
Definition
- % of subjects or scores in the reference group who fall at or below a particular raw score |
|
|
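Worked example [not from the deck]: a Python sketch of a percentile rank, counting reference-group scores at or below the raw score; the norm group is invented.

    def percentile_rank(raw, reference_scores):
        """% of the reference group scoring at or below `raw`."""
        at_or_below = sum(1 for s in reference_scores if s <= raw)
        return 100 * at_or_below / len(reference_scores)

    norm_group = [55, 60, 62, 65, 68, 70, 71, 74, 78, 83]  # made-up norms
    print(percentile_rank(70, norm_group))  # 60.0 → 60th percentile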
Term
Scaled Score |
|
Definition
-type of standard score based on a range of 1-20
-mean is 10
-SD = + or - 3
-even increments
-8-12 is average |
|
|
Term
Z-Score |
|
Definition
- used for smaller dispersions
Mean= 0
SD= + or - 1 |
|
|
Term
T-Score |
|
Definition
- Used for very tight scores
Ex. 100, 99.5, 99
Mean= 50
SD= + or - 10 |
|
|
Term
Stanine |
|
Definition
-1/9th of the range of the standard-score distribution
-average is 5
-typically used for educational testing
-SD is 2 |
|
|
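Worked example [not from the deck]: one Python sketch converting a single invented raw score to each standard-score scale on the cards above (z, T, scaled, stanine), using the usual linear transformations.

    def z_score(x, mean, sd):
        return (x - mean) / sd

    # Invented norm group: mean 40, SD 6; client's raw score = 49
    z = z_score(49, mean=40, sd=6)              # z = 1.5  (mean 0, SD 1)
    t = 50 + 10 * z                             # T-score = 65  (mean 50, SD 10)
    scaled = 10 + 3 * z                         # scaled = 14.5 (mean 10, SD 3)
    stanine = min(9, max(1, round(2 * z + 5)))  # stanine = 8   (mean 5, SD 2)
    print(z, t, scaled, stanine)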
Term
Age Equivalent |
|
Definition
-age in years & months at which the raw score matches average performance
-must be careful when applying |
|
|
Term
Grade Equivalent |
|
Definition
-grade at which the raw score matches average performance, expressed in tenths of a grade [e.g., 3.2] |
|
|
Term
Standard Error of Measurement |
|
Definition
SEM: a number/measure
-helps take into account external factors that affect performance
-shows how close the true score is to the observed score
-ONLY add/subtract it to/from the standard score
|
|
|
Term
Confidence Interval |
|
Definition
-range of performance around the observed score
[example bands, assuming SEM ≈ 4:]
68%= + or - 4
90%= + or - 7
95%= + or - 8 |
|
|
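Worked example [not from the deck]: a Python sketch tying SEM to confidence intervals. Assuming a test with SD 15 and reliability .93 gives SEM ≈ 4, which reproduces the ±4 / ±7 / ±8 bands on the card above; the observed score is invented.

    import math

    sd, reliability = 15, 0.93
    sem = sd * math.sqrt(1 - reliability)   # SEM = SD * √(1 - reliability) ≈ 3.97

    observed = 92  # invented standard score
    for level, z in [(68, 1.00), (90, 1.64), (95, 1.96)]:
        half = round(z * sem)               # 4, 7, 8
        print(f"{level}%: {observed - half} to {observed + half}")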