Term
| Briefly explain how experiments provide the three types of evidence required to establish causality: association, direction of influence, and elimination of rival hypotheses. |
|
Definition
| Evidence of association is demonstrated by differences among experimental conditions on measures of the dependent variable: a statistically significant difference shows that the independent variable, in terms of which the experimental conditions differ, is related to the dependent variable. Direction of influence is established by the ordering of events in an experiment: the manipulation of the independent variable always occurs before the measurement of the dependent variable. Plausible rival explanations are eliminated by randomization, which controls for characteristics that subjects bring to the experiment, and by the constancy of conditions other than the manipulation of the independent variable, which controls for extraneous factors during the course of the experiment. |
|
|
Term
| What purpose does a test of statistical significance serve in an experiment? |
|
Definition
| Tests of statistical significance indicate whether the results—the observed differences among experimental conditions—are likely to have occurred by chance. |
|
|
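The logic of such a test, asking whether an observed difference between conditions could plausibly have arisen by chance, can be sketched with a simple permutation test (an illustrative sketch; the function name and data are hypothetical, not from the text):

```python
import random

def permutation_test(group_a, group_b, n_iter=10_000, seed=0):
    """Estimate the probability that a mean difference at least this
    large would arise by chance alone (a simple permutation test)."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)  # mimic "no treatment effect": condition labels are arbitrary
        diff = abs(sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    return extreme / n_iter

# Clearly separated conditions yield a small p-value; identical ones do not.
p = permutation_test([5, 6, 7, 8], [1, 2, 2, 3])
```

A small p-value (conventionally below .05) indicates that a difference this large would rarely occur if the independent variable had no effect.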
Term
| Should matching be substituted for random assignment in an experiment? |
|
Definition
| No. Although matching creates experimental conditions that are similar on the matched characteristics, the conditions may still differ in the distribution of other, unmatched characteristics unless subjects are also randomly assigned. |
|
|
Term
| Differentiate random assignment from random sampling. |
|
Definition
| Sampling refers to the selection of cases for a study; random sampling indicates that every case has an equal chance of being selected. Once subjects are selected for an experiment, which rarely involves random sampling, they must be randomly assigned to the conditions of the experiment. Thus, random sampling is a method of drawing a sample of cases, such as the pool of subjects in an experiment, whereas random assignment is a method of assigning subjects from the pool to experimental conditions. |
|
|
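The distinction can be made concrete in a few lines of code (a minimal sketch; the population and pool sizes are hypothetical):

```python
import random

rng = random.Random(42)

# A hypothetical sampling frame of 1,000 cases.
population = [f"case_{i:03d}" for i in range(1000)]

# Random SAMPLING: draw the subject pool from the population,
# giving every case an equal chance of being selected.
subject_pool = rng.sample(population, 40)

# Random ASSIGNMENT: allocate the selected subjects to experimental
# conditions, so that the groups differ only by chance.
rng.shuffle(subject_pool)
treatment, control = subject_pool[:20], subject_pool[20:]
```

Sampling operates on the population; assignment operates only on the pool already in hand.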
Term
| Briefly distinguish between internal and external validity. |
|
Definition
| Internal validity refers to the validity of the study design—whether the study allows one to infer a causal relationship between the independent and dependent variables. External validity refers to the extent to which experimental results may be generalized to situations beyond the specific context of the experiment. |
|
|
Term
| Describe the typical sample of subjects in an experiment. |
|
Definition
| The typical sample of subjects in an experiment consists of college students who have “volunteered” to participate in exchange for a small payment or course “credit.” |
|
|
Term
| How do experimenters rationalize the convenience samples typical of experiments? |
|
Definition
| Experimenters argue that (1) differences in background characteristics such as age and education are likely to have little effect on subjects’ reactions, (2) sampling considerations are secondary to the primary aim of establishing the existence of a causal relationship, and (3) the generality of experimental results with regard to different subject populations can be demonstrated through replication. |
|
|
Term
| How can one increase the external validity of an experiment? |
|
Definition
| External validity is commonly increased by replicating the experiment while varying one or more features, such as the nature of the subject population, the setting, the experimental manipulation, or the measurement of the dependent variable. |
|
|
Term
| What are the four main stages of an experiment? |
|
Definition
| The four main stages of an experiment are (1) introduction to the experiment, (2) manipulation of the independent variable, (3) measurement of the dependent variable, and (4) debriefing and/or postexperimental interview. |
|
|
Term
| What is the purpose of a cover story? |
|
Definition
| The cover story—a plausible false explanation of the nature of the experiment—is designed to deceive subjects about the true intent of the experiment so that they will not be preoccupied with guessing the hypothesis or trying to be helpful by acting in accord with a presumed hypothesis. |
|
|
Term
| Explain the problem of multiple meanings in an experimental manipulation. |
|
Definition
| Multiple meanings refers to the measurement problem that occurs when an experimental manipulation is open to a variety of interpretations. In general, the more complex the experimental situation, the more likely that subjects’ interpretations of the situation will vary and differ from the experimenter’s intended meaning. |
|
|
Term
| What is the purpose of manipulation checks? |
|
Definition
| Manipulation checks assess the validity of experimental manipulations by determining if they are appropriately experienced or interpreted by subjects. |
|
|
Term
| Why are behavioral measures generally preferred over self-report measures of the dependent variable in experiments? |
|
Definition
| Behavioral measures are less likely to be contaminated by subjects’ self-censoring of responses. Also, if overt behavior is the object of study, it is better to measure it directly rather than obtain an indirect self-report of how subjects say they will behave. |
|
|
Term
| What purpose does debriefing serve? |
|
Definition
| When deception is used, debriefing serves to inform subjects about the nature of and reasons for the deception. It is also a time to explain the experiment’s true purpose and importance, to learn about subjects’ thoughts and reactions during the experiment, and to convince subjects not to tell others about the experiment. |
|
|
Term
| When is an experiment high in experimental realism? When is it high in mundane realism? |
|
Definition
| An experiment is high in experimental realism when it has an impact on subjects, so that they pay careful attention, regard the situation seriously, and feel involved rather than detached. An experiment is high in mundane realism when the setting and events of the experiment are similar to everyday experiences. |
|
|
Term
| What is the difference between an impact experiment and a judgment experiment? Which is higher in experimental realism? |
|
Definition
| In a judgment experiment, subjects make judgments from materials provided by the experimenter, whereas in an impact experiment, which is higher in experimental realism, subjects directly experience the manipulation. This difference would be reflected, for example, in reading a description of a stimulus person as opposed to actually interacting with a stimulus person. |
|
|
Term
| Why is it important to consider the social nature of an experiment? |
|
Definition
| It is important to consider the social nature of an experiment because the motives and expectations of subjects and their interaction with the experimenter may have as much to do with how subjects respond as do the experimental manipulations. |
|
|
Term
| Describe the role expectations of the typical experimental subject. |
|
Definition
| Subjects in an experiment agree to place themselves under the control of the experimenter and to carry out assigned tasks unquestioningly. |
|
|
Term
| Briefly describe Orne's model of the good subject. How is the model related to the problem of demand characteristics? |
|
Definition
| The “good” subject believes in the value of the research, willingly complies with all instructions and requests, and hopes to help out the experimenter by acting so as to validate the experimental hypothesis. Such motives may heighten subjects’ sensitivity to demand characteristics, so that the latter account for their actions more than the intended experimental manipulation. |
|
|
Term
| Explain the motives of the anxious subject and the bad subject. |
|
Definition
| The “anxious” subject is concerned about being evaluated, and therefore motivated to make a positive impression or at least avoid a negative one. The “bad” subject, out of hostility or disdain, is motivated to sabotage the research by providing useless or invalid responses. |
|
|
Term
| Describe various ways in which the experimenter can affect the outcome of an experiment. |
|
Definition
| Experimenters can bias findings by making recording or computational errors, by falsifying data, or by inadvertently allowing personal characteristics or expectancies to affect subjects’ behavior. |
|
|
Term
| How are experimenter expectancies communicated to subjects? |
|
Definition
| Experimenter expectancies appear to be conveyed nonverbally—through facial expressions, gestures, voice quality, and tone of voice. |
|
|
Term
| How do experiments demonstrating experimenter expectancy effects differ from most other experiments? |
|
Definition
| Unlike most experiments, studies of experimenter effects (1) tend to involve highly ambiguous tasks with (2) experimenters running subjects in only one condition. |
|
|
Term
| Identify two methods for minimizing demand characteristics and two methods for reducing experimenter effects. |
|
Definition
| The effects of demand characteristics may be minimized by (1) pretesting to identify subjects’ perceptions of demand characteristics, (2) using a cover story to satisfy subjects’ suspicions about the purpose of the experiment, (3) increasing experimental realism, (4) physically separating the settings for the experimental manipulation and the measurement of the dependent variable, (5) conducting the experiment in a natural setting in which subjects are unaware that an experiment is taking place, and (6) asking subjects to play the role of “faithful” subject. Experimenter effects may be reduced by (1) keeping both subject and experimenter blind to which condition a subject is in (the “double-blind technique”), (2) using two or more experimenters, each of whom is blind to some part of the experiment, (3) conducting a single experimental session for all subjects, and (4) conveying instructions through audio or videotapes. |
|
|
Term
| What are the advantages and disadvantages of using a live experimenter rather than a taped recording to provide instructions to a subject? |
|
Definition
| Compared to a tape recording, a “live” experimenter will enhance experimental realism but may introduce variation in the manipulation or experimental conditions through inadvertent nonverbal cues. |
|
|
Term
| Compare the advantages and disadvantages of a field experiment versus a laboratory experiment. |
|
Definition
| Compared to laboratory experiments, field experiments are higher in mundane realism and external validity, often completely eliminate the effects of demand characteristics, and are more amenable to applied research. On the other hand, field experiments usually offer less control than laboratory experiments, so that they may only approximate a true experimental design, and often cannot incorporate standard ethical safeguards of subjects’ rights such as informed consent and debriefing. |
|
|
Term
| How can the experimental approach be incorporated into survey research? |
|
Definition
| Experimentation may be carried out in conjunction with surveys by systematically varying the wording of questions contained in a questionnaire or the factors presented in decision-making vignettes, and by creating different sets of questionnaires. |
|
|
Term
| Do the units of analysis in experiments always consist of individuals? |
|
Definition
| No. It is common for experiments to involve dyads (pairs of individuals) or larger groups of individuals. The text cites an example of a study in which neighborhoods were the units of analysis. |
|
|
Term
| What is the basic principle of a good design? |
|
Definition
| The basic principle of good design is allowing only one factor to vary at a time while controlling for all other factors. |
|
|
Term
| What is meant by threats to validity in a research design? |
|
Definition
| Experimental designs are valid to the extent that they offer sound evidence that the manipulated independent variable is the only viable explanation of observed differences in the dependent variable. Threats to validity refer to extraneous variables which, if uncontrolled, offer plausible alternative explanations of such differences. |
|
|
Term
| If history or some other threat to internal validity is present in an experimental design, then the possible effects of an extraneous variable are confounded with the __________ |
|
Definition
| effects of the independent variable |
|
|
Term
| Explain the difference between history and maturation effects; between testing and instrumentation effects. |
|
Definition
| History and maturation refer to factors that are concurrent with but extraneous to the experimental manipulation. History effects are events in the subjects’ environment that affect the experimental outcome; maturation effects are psychological or physiological changes in subjects that affect the outcome. Testing and instrumentation refer to factors that influence the measurement of the dependent variable. A testing effect occurs when pretesting affects subjects’ responses on the posttest; an instrumentation effect occurs when the method of measuring the dependent variable changes over time or differs across experimental conditions. |
|
|
Term
| Under what circumstance is regression toward the mean likely to be a threat to internal validity? |
|
Definition
| Statistical regression poses a probable threat to internal validity when subjects are selected for a particular experimental condition because of their extreme scores on the dependent variable. |
|
|
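A quick simulation shows why: if observed scores combine a stable true score with transient error, cases selected for extreme pretest scores will, on average, score closer to the mean at posttest even when nothing is done to them (a sketch with hypothetical parameters, not data from the text):

```python
import random

rng = random.Random(1)

# Observed score = stable true score + transient measurement error.
true_scores = [rng.gauss(100, 10) for _ in range(2000)]
pretest = [t + rng.gauss(0, 10) for t in true_scores]
posttest = [t + rng.gauss(0, 10) for t in true_scores]

# Select the top 5% of pretest scorers, as a program targeting
# "extreme" cases might do.
cutoff = sorted(pretest)[int(0.95 * len(pretest))]
selected = [i for i, score in enumerate(pretest) if score >= cutoff]

mean_pre = sum(pretest[i] for i in selected) / len(selected)
mean_post = sum(posttest[i] for i in selected) / len(selected)
# mean_post drifts back toward 100 although no treatment occurred.
```

Without a randomized control group, this purely statistical drift could be mistaken for a treatment effect.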
Term
| Which threats to internal validity are likely to be present in (a) the one-shot case study, (b) the one-group pretest-posttest design, and (c) the static-group comparison? |
|
Definition
| (a) The primary threats to validity in the one-shot case study are history, maturation, and attrition. (b) In the one-group pretest-posttest design, the primary threats are history, maturation, testing, and instrumentation. (c) In the static-group comparison, the major threats are selection and differential attrition. |
|
|
Term
| Explain how the pretest-posttest control group design adequately controls for each of the major threats to internal validity. |
|
Definition
| Random assignment in this design eliminates the effects of selection and statistical regression by making the experimental and control groups similar in composition. The presence of an experimental and a control group, both of which are pretested and exposed to the same general environment, means that the effects of history, maturation, and testing should be felt equally in both groups. Instrumentation is controlled provided that the measurement of the dependent variable is the same for both groups. Finally, the effects of differential attrition may be controlled by comparing the pretest scores of those subjects who drop out of each group. |
|
|
Term
| Explain why random assignment to experimental conditions can or cannot be used to rule out the following threats to internal validity: (a) maturation, (b) history, (c) instrumentation, (d) selection, and (e) statistical regression. |
|
Definition
| Randomization rules out (d) selection and (e) statistical regression as threats to internal validity because it eliminates systematic differences between experimental conditions in the composition of subjects, which is the source of these validity threats. However, randomization does not affect events or processes that occur once subjects are assigned to conditions, which are the sources of (a) maturation and (b) history effects; nor does it affect the measurement of the dependent variable, which may produce an (c) instrumentation effect. |
|
|
Term
| What is the principal threat to external validity in a pretest-posttest control group design? |
|
Definition
| The principal threat to external validity is testing-X interaction, which means that the effect of the independent variable (X) may depend upon the presence of a pretest. |
|
|
Term
| Why is the posttest-only control group design generally preferred over the pretest-posttest control group design? |
|
Definition
| The main advantage is that it eliminates the possibility of testing-X interaction. It is also simpler and therefore more economical. |
|
|
Term
| The interaction of the independent variable with some other variable poses a threat to external validity in experiments. What are some solutions to problems of (a) selection-X interaction, (b) maturation-X interaction, and (c) history-X interaction? |
|
Definition
| (a) Selection-X interaction is minimized by using heterogeneous samples of subjects and is made less plausible by replicating an experiment with different subject populations. (b) Maturation-X interaction may be controlled by systematically varying conditions, such as the time of day, which could cause maturation effects, and may be checked by replicating an experiment under varying conditions. (c) History-X interaction also may be checked by replication, so that an experimental outcome is subject to different historical influences. |
|
|
Term
| What are some advantages and disadvantages of within-subjects designs? Why are they seldom used in research? |
|
Definition
| Within-subjects designs (1) require fewer subjects and (2) reduce the error associated with individual differences when different groups of subjects experience each experimental condition. On the other hand, participating in one experimental condition may affect how subjects respond to another, creating possible testing and order effects. Even though such effects can be controlled or estimated by counterbalancing, they cannot be eliminated. Therefore, within-subjects designs should be used with caution and not at all when it is highly likely that participating in one condition of an experiment will influence how subjects will respond to another. |
|
|
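Counterbalancing, mentioned above as the way to control order effects, can be sketched by rotating subjects through every possible order of conditions (illustrative only; the subject and condition names are hypothetical):

```python
from itertools import permutations

conditions = ["A", "B", "C"]

# Complete counterbalancing: use every possible order of the conditions,
# so each condition appears equally often in each serial position.
orders = list(permutations(conditions))  # 3! = 6 orders

# Rotate subjects through the orders.
subjects = [f"subject_{i}" for i in range(12)]
schedule = {s: orders[i % len(orders)] for i, s in enumerate(subjects)}
```

Spreading order effects evenly across conditions lets them be estimated, but, as the answer notes, it does not eliminate them.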
Term
| The Solomon four-group design may be viewed as a 2 × 2 factorial design. What are the factors and the levels of each factor in this design? |
|
Definition
| The Solomon four-group design contains two factors, each with two levels: the treatment (presence or absence) and the pretest (presence or absence). |
|
|
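The four groups are simply the cross of the two factors, which can be enumerated directly (a minimal sketch):

```python
from itertools import product

# Two factors, each with two levels, as in the Solomon four-group design.
treatment_levels = ["treatment", "no treatment"]
pretest_levels = ["pretest", "no pretest"]

# The 2 x 2 cross yields the four groups of the design.
groups = list(product(treatment_levels, pretest_levels))
```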
Term
| What are the principal advantages of factorial over nonfactorial experimental designs? |
|
Definition
| Factorial designs are more cost efficient, allow for the assessment of interaction effects, and enhance external validity by determining the effects of one variable under various conditions (represented by “levels” of other variables included in the factorial design). |
|
|
Term
| How do quasi-experiments differ from true experiments? |
|
Definition
| Quasi-experimental designs omit one or more features of true experimental designs, such as randomization, a control group, or the constancy of conditions. |
|
|
Term
| Give two examples of quasi-experimental designs and explain how each design controls for the major threats to internal validity. |
|
Definition
| The separate-sample pretest-posttest design uses separate groups for a pretest and posttest; if subjects are randomly assigned to the pretest and posttest conditions, this design controls for selection, and the use of separate groups eliminates testing and testing-X interaction. Nonequivalent control group designs lack randomization but include at least one control group; the more similar the comparison groups in recruitment and history, the more likely that this design controls effectively for history, maturation, testing, and regression. |
|
|
Term
| What are the three ways that rival explanations are ruled out in quasi-experimental designs? |
|
Definition
| Rival explanations are ruled out in quasi-experimental designs by (1) including special design features, (2) examining additional data that bear on specific threats, and (3) reasoning against the plausibility of particular validity threats. |
|
|
Term
| What are the three principal features of professional survey research? |
|
Definition
| (1) Large probability samples, (2) systematic questionnaire or interview procedures, and (3) computerized, quantitative data analysis. |
|
|
Term
| Give an example of a survey study in which the unit of analysis is not the individual. |
|
Definition
| A survey study of campus drinking norms and policy might treat colleges and universities as units of analysis, perhaps interviewing key administrators such as the dean of students and director of fraternity affairs as well as campus opinion leaders. A study of the implementation of early retirement policies might use business organizations as units and involve interviews with company presidents or other key officials responsible for such policies. |
|
|
Term
| Contrast the objectives of unstructured, structured, and semistructured interviews. |
|
Definition
| Unstructured interviews have very general and loosely defined objectives that allow interviewers considerable freedom in questioning; structured interviews have highly specific and well defined objectives, which are met through tight restrictions on the order and form of questioning; and semi-structured interviews have specific objectives, but allow some freedom with regard to the formulation of questions. |
|
|
Term
| What is the General Social Survey (GSS)? |
|
Definition
| The General Social Survey (GSS) is an omnibus personal interview survey of a national probability sample conducted since 1972 by the National Opinion Research Center. Until 1994, the survey was conducted annually (except for 1979, 1981, and 1992) with a sample of about 1500 respondents. Starting in 1994 the GSS shifted to biennial surveys with twice as many respondents. The objective of the survey is to provide high-quality data to the social science research community. |
|
|
Term
| Describe the kinds of questions that can be included in a survey. |
|
Definition
| Survey questions may include requests for social background information, reports of past behavior, statements of attitudes, beliefs, values, and behavioral intentions, and sensitive information. |
|
|
Term
| Discuss the advantages and disadvantages of surveys in relation to experiments. |
|
Definition
| Relative to experiments, surveys generally can address a wider range of topics and collect substantially more information from much larger and more representative samples; thus, they are more flexible and more economical in that they can address several research questions at one time. On the other hand, surveys are less effective in testing causal relationships than experiments, and they are limited to self-reports of behavior, which not only are subject to self-censoring of responses but also cannot substitute for studies of overt behavior. |
|
|
Term
| What inherent weakness does the survey share with the laboratory experiment? |
|
Definition
| Surveys, like experiments, are susceptible to reactive measurement effects. |
|
|
Term
| What limitation of cross-sectional surveys is addressed by contextual designs and social network designs? |
|
Definition
| Contextual designs and social network designs provide direct information about interpersonal relations and social contexts—important objects of social research—whereas in cross-sectional surveys such information is limited by the extent and accuracy of individuals' reports about the people and groups with whom they interact. |
|
|
Term
| What is the difference between a trend study and a panel study? Which of these designs permits the assessment of individual change? |
|
Definition
| Both trend studies and panel studies are longitudinal; that is, they involve surveys of respondents at different points in time. Trend studies survey separate, independent samples of respondents, whereas panel studies survey the same respondents repeatedly over time. Only panel studies enable one to assess individual changes. |
|
|
Term
| What are the three kinds of influences examined in cohort analysis? |
|
Definition
| Cohort studies examine influences due to age, historical period, and membership in a particular cohort. |
|
|
Term
| Outline the major decision points in planning a survey. |
|
Definition
| The major decision points in planning a survey are (1) formulate research objectives, (2) review literature, (3) select units of analysis and variables, (4) develop sampling plan, and (5) construct survey instrument (4 and 5 are usually concurrent activities). |
|
|
Term
| What are the relative advantages and disadvantages of structured versus unstructured survey procedures? |
|
Definition
| Structured survey instruments reduce error and increase reliability but may adversely affect validity by dampening respondent motivation and by assuming that all respondents will interpret questions similarly. Unstructured interviewing facilitates exploratory research and may enhance validity; however, it requires more highly trained interviewers and more complex data analysis, and therefore greater cost per respondent. |
|
|
Term
| Which sampling design is likely to be used with face-to-face interviews? Why? |
|
Definition
| Unless the target population is highly concentrated geographically, face-to-face interview studies almost always involve multistage cluster sampling. This is the most cost-efficient sampling design in view of the time and travel required to reach respondents. |
|
|
Term
| Explain how interviews provide greater flexibility than self-administered questionnaires. |
|
Definition
| Interviews provide more flexibility by allowing the researcher (or interviewer) to clarify questions, to elicit more complete responses, to ascertain the order in which questions are answered, to use more varied question formats, and to reach respondents unable or unwilling to respond to a questionnaire. |
|
|
Term
| What particular problems are associated with face-to-face interviews? |
|
Definition
| Major problems with face-to-face interviewing are (1) high cost per respondent, (2) difficulty in reaching some respondents, (3) difficulties in supervising a widely dispersed staff of interviewers, and (4) response biases introduced by interviewers. |
|
|
Term
| Relative to face-to-face interviewing, what advantages does telephone interviewing offer? |
|
Definition
| Relative to face-to-face interviewing, telephone interviewing is (1) substantially less costly and time-consuming, (2) much simpler in terms of staff supervision, and (3) easier for making call-backs to not-at-home respondents. |
|
|
Term
| Compare face-to-face interviews, telephone interviews, and self-administered questionnaires with respect to (a) response rates and sampling quality, (b) time and cost, and (c) the type (complexity and sensitivity) of questions asked. |
|
Definition
| (a) Response rates and sample quality tend to be highest in face-to-face interview studies, slightly lower with telephone interviews, and much lower with mailed questionnaires. (b) Face-to-face interviews generally are much more costly and time consuming than the other survey modes, while telephone interviews generally cost more but take less time than mailed questionnaires. (c) Face-to-face interviews allow one to ask the most complex and sensitive questions; telephone interviews must ask questions simple enough for respondents to understand and retain while formulating an answer and, like mailed questionnaires, tend to yield shorter answers (and more nonresponses) to open-ended questions. |
|
|
Term
| In what ways can computer assisted personal and telephone interviewing assist the interviewer? |
|
Definition
| CAPI prompts the interviewer with instructions and question wording in the proper order, skips questions not relevant to particular respondents, assures that the interviewer enters appropriate response codes for each question, and may even identify when respondents are giving inconsistent responses. In addition to this assistance, CATI may automatically sample and dial phone numbers, schedule callbacks, screen and select the person to be interviewed at each sampled phone number, record responses in a computer data file, and provide sampling and interviewing updates to supervisors. |
|
|
Term
| Under what conditions is a mail questionnaire survey recommended? |
|
Definition
| A mail survey is recommended for specialized groups who are likely to have a high response rate, when a large sample is desired, when costs must be kept low, and when moderate response rates are tolerable. |
|
|
Term
| What are the strengths and weaknesses of Internet surveys? |
|
Definition
| Internet or Web surveys substantially reduce costs, including the costs of increasing sample size, require less time to carry out, and, like other computer-mediated methods, offer considerable flexibility in questionnaire design. On the other hand, they are subject to coverage error, as they can only reach those with access to the Internet, and early research indicates that response rates for Web surveys tend to be low, at least as low as, if not lower than, those of mailed questionnaire surveys. |
|
|
Term
| Give an example of a mixed mode survey. |
|
Definition
| To increase the response rate in a readership survey of a Catholic diocesan newspaper, I mixed a mail survey with telephone interviews. Initially I sent a mail questionnaire survey to a random sample of subscribers; after a second follow-up to the mail survey, I conducted telephone interviews with those who failed to respond by mail. |
|
|
Term
| Outline the key steps in the field administration phase of survey research. |
|
Definition
| Field administration of a survey entails (1) interviewer selection, (2) interviewer training and pretesting, (3) gaining access to respondents, (4) interviewing and interviewer supervision, and (5) follow-up efforts. |
|
|
Term
| What qualities are desirable in an interviewer? |
|
Definition
| Interviewers should be neat and businesslike in appearance, articulate, tolerant, pleasant and cooperative, good listeners, show an interest in the survey topic, and be concerned about accuracy and detail. |
|
|
Term
| Describe the steps involved in interviewer training. |
|
Definition
| Interviewer training begins with a description of the survey and sample. Interviewers then should learn basic interviewing principles and rules, become acquainted with the interview schedule, and engage in supervised practice in using the interview schedule. |
|
|
Term
| Explain the purpose of pretesting. |
|
Definition
| The purpose of pretesting is to try out the survey instrument on persons similar to those in the target group in order to check for ambiguous questions, inappropriate response options, and the like. (See Chapter 10.) |
|
|
Term
| What should a cover letter communicate to the respondent? |
|
Definition
| A cover letter should (1) identify the researcher and/or sponsor, (2) describe the general purpose of the study, (3) show how the findings may be of benefit, (4) explain how the respondent was selected, (5) assure confidentiality and/or anonymity, (6) indicate how long the questionnaire or interview will take to complete, and (7) promise to answer questions about or provide a summary of the study's findings. |
|
|
Term
| What are the principal arguments pro and con regarding the use of standardized interviewing procedures? |
|
Definition
| The principal argument in favor of standardization is that it reduces error associated with how interviewers ask questions and respond to respondent queries. Those who oppose strict standardization contend that it inhibits the interviewer’s ability to establish rapport and motivate respondents to respond fully and honestly and that it disregards the need to detect and correct communication problems such as the misinterpretation of questions. |
|
|
Term
| Describe some sources of measurement error in surveys attributable to (a) the interviewer and (b) the respondent. |
|
Definition
| (a) Interviewers may affect responses and introduce error as a result of their physical characteristics, including race, sex, and age, and by conveying expectations to respondents about how to respond. (b) Respondents may distort or give false responses because of poor memory, desire to make a favorable impression on the interviewer, embarrassment, and dislike or distrust of the interviewer. |
|
|
Term
| What activities does the supervision of interviewers involve? |
|
Definition
| Interviewer supervision involves (1) distributing materials, keeping records, and paying interviewers, (2) overseeing the schedule of interviews, (3) collecting and checking interview schedules, (4) regularly meeting with interviewers, (5) being available to answer questions, and (6) sitting in on a few interviews. |
|
|
Term
| Why is it suggested that supervision of and contact with interviewers be maintained throughout the interviewing period? |
|
Definition
| Maintaining supervision and contact provides a mechanism for motivating interviewers, boosting morale, and maintaining the quality of interviewing. |
|
|
Term
| Why are follow-up efforts necessary? At what point should they be abandoned in interview surveys? How many follow-up mailings typically are used in mail surveys? |
|
Definition
| Follow-up efforts are essential to produce adequate response rates—to make sure that as many of the sampled respondents are interviewed or questioned as possible. In the case of interview refusals, more than one follow-up should not be used. In the case of mailed questionnaires, three follow-up mailings are the norm, with special procedures such as certified mail sometimes invoked for the third mailing. |
|
|
Term
| What cognitive steps are necessary to answer a survey question? |
|
Definition
| Respondents must (1) understand the literal and intended meaning of the question, (2) retrieve relevant information from memory, (3) formulate a response in accord with the question and the retrieved information, and (4) communicate a response deemed appropriate. |
|
|
Term
| Explain how the conversational principles of relevance and clarity could affect a respondent's answer to a survey question. |
|
Definition
| If respondents believe that interviewers will ask only clear (“clarity”) questions that are relevant (“relevance”) to their personal situation, they may feel pressure to provide prompt, but sometimes inadequate, responses rather than telling the interviewer that the questions are vague, ambiguous, or inappropriate. |
|
|
Term
| When are respondents likely to adopt a satisficing strategy in answering questions? When are they likely to adopt an optimizing strategy? |
|
Definition
| Respondents are likely to expend the minimum effort (“satisficing”) when questions are difficult for them to answer and when their motivation level is low. Conversely, easily answered questions and high motivation are more likely to produce maximum (“optimizing”) effort. |
|
|
Term
| Compare the advantages and disadvantages of open versus closed questions. When is it advisable to use open rather than closed questions? Why should open questions be used sparingly in self-administered questionnaires? |
|
Definition
| By allowing respondents freedom in answering, open-ended questions can yield a wealth of information as well as clarify the researcher’s understanding in areas where it is not well developed. On the other hand, open-ended questions require more work—of respondents in answering, of interviewers in recording answers, and of the researcher in coding and analyzing responses—and may yield uneven responses due to respondent differences in articulateness, verbosity, or willingness to answer. Closed-ended questions require less effort from respondents and interviewers, provide response options that may clarify the question or make self-disclosure more palatable, and are easier to code and analyze. However, they also are difficult to develop, requiring considerable prior knowledge of respondents, may force respondents into choosing among alternatives that do not correspond to their true feelings, and may dampen respondents’ motivation by restraining spontaneity. Open-ended questions work best in the early stages of research, when less is known about respondents, and should be used when the survey objectives are broad, respondents are highly motivated, and respondents vary widely in their knowledge or prior thought about the issue. They should be used sparingly in self-administered questionnaires because writing takes more effort than speaking and because an interviewer is not available to use probes that ask for elaboration or clarification. |
|
|
Term
| Why do researchers resort to indirect questions? What special problems do they present to the researcher? |
|
Definition
| Indirect questions may be used when respondents are unable or unwilling to reveal certain characteristics or experiences directly to the researcher. Responses to such questions are difficult to code, often are open to various interpretations, and are ethically problematic. |
|
|
Term
| Is it considered unethical to borrow questions from previous research? Explain. |
|
Definition
| The use of questionnaire items and scales from previous research is considered good research practice and is not at all unethical unless one uses copyrighted material without permission. |
|
|
Term
| Describe some characteristics of a good opening question in an interview or questionnaire. |
|
Definition
| A good opening question should be relatively easy to answer, interesting, and consistent with respondent expectations, so that it engages respondents’ interest and motivates them to complete the survey. |
|
|
Term
| What purpose do transitions serve? |
|
Definition
| Transitions are designed to improve the flow of a survey, to enhance respondents’ understanding and motivation and refocus their attention by providing a brief rationale or description of the ensuing questions. |
|
|
Term
| As a general guide to writing items and organizing the entire survey instrument, what should you do before you begin to write individual questions? |
|
Definition
| Formulate your research objectives clearly before you begin to write questions. |
|
|
Term
| What are five common wording problems in constructing survey questions? |
|
Definition
| (1) lack of clarity or precision, (2) inappropriate vocabulary, (3) double-barreled questions, (4) loaded words or leading questions, (5) insensitive wording |
|
|
Term
| How can a funnel sequence or inverted funnel sequence solve the survey researcher's frame of reference problem? |
|
Definition
| A funnel sequence establishes a frame of reference for specific questions by first asking general questions, which often are open-ended, and then moving progressively to more and more specific questions. The inverted funnel sequence reverses this sequence, asking more specific questions first, which form the frame of reference for a general opinion question. |
|
|
Term
| Suppose you want to know why students choose a specific major. Applying reason analysis, construct a set of questions to find out the reasons for the students' choice of major. |
|
Definition
| The accounting scheme could contain some of the same elements as the scheme for determining why students select a particular school. Thus, we might ask the following questions: What is your major? When did you select this as your major? Did you switch from another major? If so, why? Were there any other areas of study in which you were interested? What especially appealed to you about this major? Is this major related to specific career interests? Did your instructors, friends, parents or anyone else help you decide on this major? Who? How much influence did they have? |
|
|
Term
| What are the two types of memory problems with which survey researchers must deal? Identify three ways of increasing the accuracy of respondents' recall. |
|
Definition
| The two problems are forgetting and distortion: respondents may be unable to recall information and/or may not recall it objectively. The most effective ways to increase the accuracy of recall are (1) providing a context for answering, such as by asking questions in life sequence, (2) providing lists, and (3) asking respondents to check records. |
|
|
Term
| How can one minimize the tendency to give socially desirable responses? |
|
Definition
| The tendency to give socially desirable responses may be minimized by carefully wording and placing sensitive questions, assuring anonymity and emphasizing scientific importance, making statements sanctioning less socially desirable responses, and building interviewer-respondent rapport. |
|
|
Term
| Identify two methods of avoiding acquiescence and positional response sets. |
|
Definition
| Response sets may be avoided by (1) clearly spelling out the content of response options rather than using simple agree-disagree categories, and (2) varying the arrangement of questions and the manner in which they are asked (e.g., writing attitude statements so that an “agree” response represents one end of the attitude continuum on some items and the opposite end on other items). |
|
|
Term
| What is the relation between a filter question and a contingency question? |
|
Definition
| Contingency questions are questions intended for only part of the sample of respondents; filter questions determine who is to answer which of the subsequent contingency questions. |
|
|
Term
| What does it mean to pretest a survey instrument? What purpose does pretesting serve? |
|
Definition
| Pretesting a survey instrument involves trying it out on a small sample of respondents. It facilitates the revision and improvement of the instrument by identifying such problems as low response rates to sensitive questions, lack of variation in responses, item ambiguity, the appropriateness of response options to closed questions, and the analytical complexity of answers to open-ended questions. |
|
|
Term
| What is the difference between cognitive interviewing techniques and field pretesting? When is each technique used? What purposes and information does each technique serve? |
|
Definition
| Cognitive interviewing techniques are used first to diagnose question wording, ordering, and formatting problems in a draft survey instrument. Typically, small, unrepresentative samples of paid subjects are asked to verbalize their thought processes during (“think-alouds”) or after (follow-up probes, paraphrasing requests) answering each question being pretested. Then the revised survey instrument and personnel are field pretested under realistic interviewing conditions with a group of respondents similar to the target population for which the survey is designed. Field pretesting supplements cognitive interviewing techniques by identifying instrument problems associated with subgroups of diverse target populations, with interviewer behaviors, and with interviewer-respondent interactions. |
|
|
Term
| Describe three primary methods of cognitive interviewing. |
|
Definition
| (1) In “think-aloud” interviews, respondents are asked to think aloud, reporting everything that comes to mind, as they determine a response to pretest questions. (2) In the probing question technique, interviewers ask follow-up probes to explore the respondents’ thought processes in formulating pretest question responses. (3) In paraphrasing follow-ups, respondents are asked to summarize or repeat the question in their own words. |
|
|
Term
| Describe five primary methods of field pretesting. |
|
Definition
| (1) In behavioral coding, live or taped interviewer-respondent interactions are systematically coded to identify the frequency of problematic respondent and interviewer behaviors on each question. (2) In respondent debriefings, structured follow-up questions at the end of pretest interviews are used to identify instrument problems from the respondent’s perspective. Similarly, instrument problems from the interviewer’s perspective are obtained from (3) interviewer debriefings, which usually involve focus group discussions. (4) In response analysis, the responses of pretest respondents are tabulated and examined for problematic response patterns. (5) Split panel tests compare instrument versions by experimentally manipulating question ordering, wording, or formats. |
|
|