Term
Evaluation |
|
Definition
Assesses a process or program to provide evidence and feedback to the program.
The comparison of an object of interest against a standard of acceptability. |
|
|
Term
Research |
|
Definition
Is an organized process founded upon the scientific method for investigating problems. It involves systematic progression through a series of necessary steps. |
|
|
Term
Reliability |
|
Definition
Refers to the consistency, dependability, and stability of the measurement process.
An empirical estimate of the extent to which an instrument produces the same result (measure or score) when applied once or two or more times. |
|
|
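The test-retest estimate described above is commonly computed as a correlation between two administrations of the same instrument. Below is a minimal Python sketch, assuming two hypothetical sets of scores from the same respondents:

```python
# Test-retest reliability sketch: correlate scores from two administrations
# of the same instrument. The score lists below are hypothetical.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

time1 = [12, 15, 9, 20, 17, 11, 14]   # scores at the first administration
time2 = [13, 14, 10, 19, 18, 10, 15]  # scores at the second administration
print(f"Test-retest reliability (r) = {pearson_r(time1, time2):.2f}")
```

Values of r closer to 1.0 indicate a more stable (reliable) measurement process.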
Term
Validity |
|
Definition
Is the degree to which a test or assessment measures what it is intended to measure. Using an acceptable (valid) instrument increases the chance of measuring what was intended.
Ensuring a study is technically sound. |
|
|
Term
Variables |
|
Definition
Are operational forms of a construct. They designate how the construct will be measured in specific scenarios.
A construct, characteristic, or attribute that can be measured or observed. |
|
|
Term
What is a refereed journal? |
|
Definition
A journal which contains articles published after they have been reviewed by a panel of experts in the field or in a specific content area (peer reviewed). |
|
|
Term
Where are some of the places information can be found when conducting research? |
|
Definition
Indexes, abstracts, government documents, and computerized databases |
|
|
Term
What are the eight questions that can be asked when evaluating research in the literature? |
|
Definition
1. Was the purpose of the study stated?
2. Was the research question or hypothesis stated?
3. Were the subjects in the study described? Did the literature describe subject recruitment?
4. Were the design and location of the study described?
5. Were the data collection instruments described?
6. Did the presented results reflect the research question or hypothesis?
7. Were the conclusions reflective of the research design and data analysis?
8. Were the implications meaningful to the priority population? |
|
|
Term
What should be considered when collecting data? |
|
Definition
The method should be linked with the objectives and appropriate research questions. |
|
|
Term
What are some of the instruments that may be used for data collection/procedures used in process evaluation? |
|
Definition
Surveys, in-depth interviews, informal interviews, key informant interviews, direct observation, expert panel reviews, quality circles, protocol checklists, Gantt charts, behavior assessments, face-to-face interviews, or focus groups |
|
|
Term
What are six steps in the development or evaluation of an existing instrument? |
|
Definition
1. determine the purpose and objectives of the instrument
2. review existing instruments
3. conduct an early review with colleagues
4. conduct a review with a panel of experts
5. pilot test the instrument with an appropriate sample population
6. revise the instrument based on the steps listed above |
|
|
Term
What is formative evaluation? |
|
Definition
Looks at the ongoing process of evaluation while the program is being developed and implemented.
Any combination of measurements obtained and judgements made before or during the implementation of materials, methods, activities, or programs to control, assure, or improve the quality of performance or delivery.
|
|
|
Term
Define process evaluation |
|
Definition
Any combination of measurements obtained during the implementation of program activities to control, assure, or improve the quality of performance or delivery. |
|
|
Term
Define summative evaluation |
|
Definition
Associated with quantitative processes. Also commonly associated with impact and outcome evaluation.
Any combination of measurements and judgements that permit conclusions to be drawn about the impact, outcome, or benefits of a program or method. |
|
|
Term
What are four other common evaluation models besides formative and summative? |
|
Definition
Decision-Making Model, Systems Analysis, Accreditation, and Goal Free |
|
|
Term
What is the purpose or design of a data collection instrument? |
|
Definition
To answer the questions that are being asked by the researcher or evaluator. |
|
|
Term
What is the purpose of data collection instruments? |
|
Definition
To gather data that will describe, explain, and explore a target population in a uniform or standardized fashion. |
|
|
Term
What are three things that should be considered when determining an instrument's validity? |
|
Definition
content, criterion, and construct validity |
|
|
Term
What is another name for content validity? |
|
Definition
Face validity |
|
|
Term
What is content or face validity? |
|
Definition
Considers the instrument's items of measurement for the relevant areas of interest.
The assessment of the correspondence between the items composing the instrument and the content domain from which the items were selected.
If, on the face, the measure appears to measure what it is supposed to measure. |
|
|
Term
What is Criterion Validity? |
|
Definition
Refers to one measure's correlation to another measure of a particular situation.
The extent to which data generated from a measurement instrument are correlated with data generated from a measure (criterion) of the phenomenon being studied, usually an individual's behavior or performance. |
|
|
Term
What is construct validity? |
|
Definition
Ensures that the concepts of an instrument relate to the concepts of a particular theory.
The degree to which an instrument measures the theoretical construct or trait it is intended to measure, behaving as the underlying theory would predict. |
|
|
Term
What are four things health educators should do when developing data-gathering instruments? |
|
Definition
1. Develop instrument specifications
2. Develop instructions for implementing the instrument and examples of how to complete items
3. Establish item scoring procedures
4. Conduct an item analysis and reliability and validity tests (see the reliability sketch below) |
|
|
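The item analysis in step 4 often includes an internal-consistency reliability estimate. The sketch below computes Cronbach's alpha; the item scores and sample size are illustrative assumptions, not part of the original card.

```python
# Internal-consistency reliability sketch: Cronbach's alpha from pilot-test data.
# The item scores below are hypothetical.
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one inner list per item, each holding one score per respondent."""
    k = len(item_scores)                                   # number of items
    item_vars = [pvariance(item) for item in item_scores]  # per-item variance
    totals = [sum(resp) for resp in zip(*item_scores)]     # per-respondent totals
    return (k / (k - 1)) * (1 - sum(item_vars) / pvariance(totals))

# Hypothetical 4-item instrument piloted with 5 respondents (1-5 Likert scores).
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [3, 5, 2, 4, 3],
    [4, 4, 3, 5, 1],
]
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```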
Term
What two categories can research methods and designs be divided into? |
|
Definition
Quantitative and Qualitative |
|
|
Term
What is quantitative research associated with? |
|
Definition
Experimental Research
When testing a hypothesis, experimental research is the design most often used. |
|
|
Term
What are five examples of ways to collect qualitative research? |
|
Definition
observation, participant observation, document study, interviews, or focus groups |
|
|
Term
What should data collection methods be designed to measure? |
|
Definition
Program objectives
They should also match the needs, sample size, and resources. |
|
|
Term
What is the focus of a Quantitative evaluation? |
|
Definition
Quantifying (numbers), or measuring, things related to the health education program |
|
|
Term
Describe qualitative research. |
|
Definition
It is more descriptive in nature and attempts to answer questions with a deeper understanding of the program participants. |
|
|
Term
What is the purpose of an Institutional Review Board (IRB)? |
|
Definition
To protect human subjects involved in research. |
|
|
Term
What are the three primary tasks in implementing a research design? |
|
Definition
1. Measurement
2. Use of a design
3. Analysis of the data |
|
|
Term
What is internal validity? |
|
Definition
Identifying the effects (outcomes) as being attributable to the program and not to other factors related to the evaluation design.
OR
Control of all influences between the groups being compared in an experiment, except for the experimental treatment itself.
OR
Degree to which change that was measured can be attributed to the program under investigation. |
|
|
Term
What is external validity? |
|
Definition
The health educator's ability to generalize the findings of a program.
or
Extent to which the program can be expected to produce similar effects in other populations. |
|
|
Term
What are some things evaluation data can be used for? |
|
Definition
1. Determine if program goals and objectives are met
2. Assess effectiveness of organizations, services, or programs.
3. Identify what changes have occurred
4. Identify what led to changes |
|
|
Term
What are two types of evaluation? |
|
Definition
|
|
Term
What are two ways evaluation and research data can be analyzed? |
|
Definition
Descriptive analysis and Inferential analysis |
|
|
Term
Describe descriptive analysis/statistics. |
|
Definition
Aims to describe the group being studied.
or
Data used to organize, summarize, and describe characteristics of a group. |
|
|
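A minimal Python sketch of descriptive analysis, using a hypothetical set of posttest scores (the values are illustrative only):

```python
# Descriptive analysis sketch: organize and summarize characteristics of a group.
from statistics import mean, median, stdev
from collections import Counter

scores = [72, 85, 90, 68, 77, 85, 92, 60, 85, 79]  # hypothetical posttest scores

print(f"n       = {len(scores)}")
print(f"mean    = {mean(scores):.1f}")
print(f"median  = {median(scores)}")
print(f"std dev = {stdev(scores):.1f}")
print(f"mode    = {Counter(scores).most_common(1)[0][0]}")
```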
Term
Describe inferential analysis/statistics. |
|
Definition
Gains knowledge about the sample that can be generalized to a similar population.
OR
Data used to determine relationships and causality in order to make generalizations or inferences about a population based on findings from a sample. |
|
|
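A minimal sketch of inferential analysis: an independent-samples t-test comparing a program group with a comparison group. The scores are hypothetical, and scipy is assumed to be available.

```python
# Inferential analysis sketch: test whether two group means differ beyond chance.
from scipy import stats

program_group    = [85, 90, 78, 92, 88, 84, 91]  # hypothetical posttest scores
comparison_group = [75, 80, 72, 78, 83, 70, 76]

t_stat, p_value = stats.ttest_ind(program_group, comparison_group, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Difference unlikely to be due to chance alone.")
```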
Term
What other sources of data can results be compared to? |
|
Definition
Previous reports on the same priority population, online databases, and peer-reviewed articles. |
|
|
Term
What are the common ways data comparisons are presented? |
|
Definition
In tables, figures, bar or line graphs, and pie charts. |
|
|
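A minimal sketch of one common presentation format, a bar graph of pretest and posttest means. The values are hypothetical and matplotlib is assumed to be available.

```python
# Data presentation sketch: a simple bar graph comparing two group means.
import matplotlib.pyplot as plt

labels = ["Pretest", "Posttest"]
means = [62.4, 78.9]  # hypothetical mean knowledge scores

plt.bar(labels, means, color=["gray", "steelblue"])
plt.ylabel("Mean knowledge score")
plt.title("Program knowledge scores: pretest vs. posttest")
plt.savefig("score_comparison.png")  # or plt.show() for interactive viewing
```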
Term
What is an evaluation or research report? |
|
Definition
The typical form of communication used to report the outcome of the plan set forth by the evaluation or research planners. |
|
|
Term
What are the typical parts of an evaluation or research report, in order? |
|
Definition
1. Introduction
2. Literature review
3. Methodology
4. Results
5. Conclusion, Recommendations, or Summary |
|
|
Term
What should be included in the introduction section of an evaluation or research report? |
|
Definition
1. Front matter- Title of the program, names of the evaluators or researchers, and date of the report
2. Executive summary
3. Explanation of the program's background
4. Problem addressed by the program |
|
|
Term
What should be included in the literature review section of an evaluation or research report? |
|
Definition
1. Explanation of relevant studies
2. Background for the study
3. Relate to the purpose of the study, hypothesis, and target population
4. Theoretical orientation |
|
|
Term
What should be included in the methodology section of an evaluation or research report? |
|
Definition
1. Describes how the evaluation or research plan was carried out
2. Overview of procedures, subjects, and data-gathering instruments
3. Description of the data analysis plan |
|
|
Term
What should be included in a results section of an evaluation or research report? |
|
Definition
1. Evidence tested against the stated hypotheses or research question
2. Presents the findings
3. Discussion of the findings |
|
|
Term
What should be included in the conclusions, recommendations, or summary section of an evaluation or research report? |
|
Definition
1. Conclusion- whether the analysis supports the hypothesis.
2. Recommendations- for future research and new research questions.
3. Summary- restates problems, procedures, and principal findings |
|
|
Term
Compare and contrast formative and summative evaluation.
(The terms process evaluation and formative evaluation are used interchangeably and are usually synonymous) |
|
Definition
Formative is a process evaluation; summative is the impact and outcome evaluation done once the program is completed.
Formative evaluation relates to quality assessment and program improvement. Summative evaluation measures program effectiveness. |
|
|
Term
Define non-experimental design. |
|
Definition
Use of pretest and posttest comparisons, or posttest analysis only, without a control group or comparison group. |
|
|
Term
Define quasi-experimental design. |
|
Definition
Use of a treatment group and a nonequivalent comparison group with measurement of both groups. (usually used when randomization is not feasible) |
|
|
Term
Define experimental design. |
|
Definition
Random assignment to experimental and control groups with measurement of both groups. |
|
|
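A minimal sketch of the random-assignment step that distinguishes a true experimental design, assuming a hypothetical list of recruited participant IDs:

```python
# Random assignment sketch: split recruits into experimental and control groups.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical recruits

random.shuffle(participants)                        # randomize the order
midpoint = len(participants) // 2
experimental_group = participants[:midpoint]        # receives the program
control_group = participants[midpoint:]             # does not receive the program

print("Experimental:", experimental_group)
print("Control:     ", control_group)
```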
Term
What are four advantages of experimental design? |
|
Definition
1. Convenience
2. Replication
3. Adjustment of variables
4. Establishment of cause and effect relationships |
|
|
Term
List three disadvantages of experimental design. |
|
Definition
- Cost
- Inability to generalize results
- Securing cooperation
|
|
|
Term
What are two of the most critical purposes of program evaluation? |
|
Definition
- Assessing and improving quality
- Determining effectiveness
In a basic sense, programs are evaluated to gain information and make decisions. |
|
|
Term
What are six reasons stakeholders may want programs evaluated? |
|
Definition
1. To determine achievement of objectives related to improved health status.
2. To improve program implementation.
3. To provide accountability to funders, community, and other stakeholders.
4. To increase community support for initiatives.
5. To contribute to the scientific base for community public health interventions.
6. To inform policy decisions. |
|
|
Term
Define standards of acceptability. |
|
Definition
The minimum levels of performance, effectiveness, or benefits used to judge value. They are typically expressed in the outcome and criterion components of a program's objectives. |
|
|
Term
What are some of the standards of acceptability used? |
|
Definition
1. Mandates (policies, statutes, laws) of regulating agencies ex. % of children immunized for school
2. Health status of the priority population ex. Rates of mortality & morbidity
3. Values expressed in the local community ex. Type of school curriculum expected
4. Standards advocated by professional organizations ex. Passing scores on certification or registration examinations
5. Norms established by research ex. Treadmill tests or % body fat
6. Norms established by evaluation of previous programs ex. Smoking cessation rates or weight loss expectations
7. Comparison or control groups ex. Used in experimental or quasi-experimental studies |
|
|
Term
Who created an evaluation framework to be used for public health programs in 1999? |
|
Definition
Centers for Disease Control and Prevention (CDC) 1999 |
|
|
Term
What are the six steps in the framework for program evaluation created by the CDC? |
|
Definition
1. Engage stakeholders
2. Describe the program
3. Focus the evaluation design
4. Gather credible evidence
5. Justify conclusions
6. Ensure use and share lessons learned |
|
|
Term
Who are the three primary groups of stakeholders? |
|
Definition
1. Those involved in the program operations
2. Those served or affected by the program
3. The primary users of the evaluation results |
|
|
Term
In the framework for program evaluation, there are four standards of evaluation. What are they? |
|
Definition
1. Utility standards ensure that the information needs of evaluation users are satisfied.
2. Feasibility standards ensure that the evaluation is viable and pragmatic.
3. Propriety standards ensure that the evaluation is ethical.
4. Accuracy standards ensure that the evaluation produces findings that are considered correct. |
|
|
Term
What are some of the problems/obstacles health educators may face in evaluating a program? |
|
Definition
1. Planners either fail to build evaluation into the program planning process or do so too late.
2. Resources may not be available to conduct an appropriate evaluation.
3. Organizational restrictions on hiring consultants and contractors.
4. Effects are often hard to detect because changes are sometimes small, come slowly, or do not last.
5. Length of time allotted for the program and its evaluation.
6. Restrictions that limit the collection of data from those in the priority population.
7. It is sometimes difficult to distinguish between cause and effect.
8. It is difficult to separate the effects of multistrategy interventions, or to isolate program effects on the priority population from real-world situations.
9. Conflicts can arise between professional standards and do-it-yourself attitudes with regard to appropriate evaluation design.
10. Sometimes people's motives get in the way.
11. Stakeholders' perceptions of the evaluation's value.
12. Intervention strategies are sometimes not delivered as intended, or are not culturally specific.
|
|
|
Term
When should you plan the evaluation of a program? |
|
Definition
At the beginning or early stages of program development. |
|
|
Term
What does developing a summative evaluation plan at the beginning of the program help reduce in the results? |
|
Definition
|
|
Term
What is internal evaluation? |
|
Definition
Evaluation conducted by one or more individuals employed by, or in some other way affiliated with, the organization conducting the program. |
|
|
Term
What are the advantages of an internal evaluation? |
|
Definition
1. Being more familiar with the organization and the program history.
2. Knowing the decision-making style of those in the organization.
3. Being present to remind others of results now and in the future
4. Being able to communicate technical results more frequently and clearly. |
|
|
Term
What are disadvantages of an internal evaluation? |
|
Definition
1. Bias or conflict of interest
2. Lack of objectivity |
|
|
Term
Define external evaluation |
|
Definition
Evaluation conducted by an individual or organization not affiliated with the organization conducting the program. |
|
|
Term
What are the advantages of using an external evaluator? |
|
Definition
1. Can often provide a more objective review and a fresh perspective.
2. Can help to ensure an unbiased evaluation outcome.
3. Brings a global knowledge of evaluation, having worked in a variety of settings.
4. Typically brings more breadth and depth of technical expertise. |
|
|
Term
What are disadvantages of using an external evaluator? |
|
Definition
1. Isolated from the program; lacks knowledge of and experience with it.
2. Costs more money |
|
|
Term
Differentiate between quantitative and qualitative methods of research. |
|
Definition
Quantitative is deductive in nature (applying a generally accepted principle to an individual case), so that the evaluation produces numeric data, such as counts, ratings, scores, or classifications. This method is good for well-defined programs. It compares the outcomes of programs with those of other groups or the general population. It is the method most often used for evaluation designs. It measures levels of occurrence, provides proof, and measures levels of action and trends.
Qualitative is an inductive method (individual cases are studied to formulate a general principle) and produces narrative data, such as descriptions. It is best used for programs that emphasize individual outcomes or in cases where other descriptive information from participants is needed. It provides depth of understanding, studies motivation, enables discovery, is exploratory and interpretive, and allows insight into behavior and trends. |
|
|
Term
What are the different types of methods used for qualitative evaluation? |
|
Definition
Case studies, content analysis, Delphi technique, elite interviewing, ethnographic studies, films, photographs, videotape recordings, focus group interviewing, kinesics, nominal group process, participant-observer studies, quality circles, and unobtrusive techniques |
|
|
Term
Differentiate between experimental design, quasi-experimental design, and nonexperimental design. |
|
Definition
Experimental design offers the greatest control over the various factors that may influence the results. It uses random assignment to experimental and control groups with measurement of both groups. It produces the most interpretable and supportive evidence of program effectiveness.
Quasi-experimental design results in interpretable and supportive evidence of program effectiveness, but usually cannot control for all factors that affect the validity of the results. There is no random assignment to groups, and comparisons are made between experimental and comparison groups.
Nonexperimental design, without the use of a comparison or control group, has little control over the factors that affect the validity of the results. |
|
|
Term
What are some of the threats to internal validity? |
|
Definition
History- an event not part of the program happens between the pretest and posttest, i.e. a national nonsmoking campaign in the middle of a state campaign.
Maturation- growing older, stronger or wiser between pre & post tests.
Testing- familiarity with the test
Instrumentation- change in measurement of the test.
Statistical regression- high or low scores on the pretest are closer to the mean or average scores on the posttest.
Selection- difference in experimental and comparison groups, lack of randomization.
Mortality (attrition)- drop outs
Diffusion or imitation of treatments- control group interacts and learns from the experimental group.
Compensatory equalization of treatments- when the services or program are not available to the control group, they complain and want the program.
Compensatory rivalry- Control group seen as an underdog therefore being motivated to work harder.
|
|
|
Term
What are threats to external validity? |
|
Definition
Social desirability, expectancy effect, Hawthorne effect, placebo effect |
|
|