Term
1. What is truth? Reality? Knowledge? |
|
Definition
Truth is the knowledge of reality. It rests on two premises. Premise 1 (Reality): that which is real (is, was, and always will be). Premise 2: truth is the knowledge of that which is, was, and always will be. Reality (what is) must be observable through perception (what we see). Knowledge comes from trial and error, experience, and observation. |
|
|
Term
List and briefly describe each of the six methods of knowing. |
|
Definition
1. Experience: trial and error
2. Tenacity: tradition
3. Authority: established beliefs
4. Intuition: agrees with reason, logic
5. Science: the scientific method
6. Theology: revelation |
|
|
Term
3. List and briefly describe each of the four “everyday errors” that often occur when we rely on personal experience for our knowledge. How does social science address these “everyday errors”? (p.4-8) |
|
Definition
Social science addresses:
1. Overgeneralization: by using systematic procedures for selecting individuals/groups to study, so that the study subjects are representative of the group to which we wish to generalize.
2. Selective/inaccurate observation: by requiring that we measure phenomena systematically.
3. Illogical reasoning: by using explicit criteria for identifying causes and for determining whether those criteria are met in a particular instance.
4. Resistance to change: by using the scientific method to lessen the tendency to answer questions about the social world from ego-based commitments, excessive devotion to tradition, and/or unquestioning respect for authority.
Social scientists insist: Show us the evidence! |
|
|
Term
4. According to Chambliss and Schutt, what are the four main goals of social research? Under what circumstances do we use descriptive research, exploratory research, explanatory research, and evaluation research? (p.8-11) |
|
Definition
Descriptive research: used to describe and define social phenomena.
Exploratory research: finds out how people get along in a given situation, what their actions mean to them, and what concerns them; usually qualitative (words rather than numbers); seeks information.
Explanatory research: identifies and tests cause and effect.
Evaluation research: determines the impact of social programs. |
|
|
Term
5. What is science? |
|
Definition
It is an objective, accurate, systematic analysis of a determinate body of empirical data, in order to discover recurring relationships among phenomena. |
|
|
Term
6. List and briefly describe the five major norms of the scientific community. |
|
Definition
1. Universalism: the source of a discovery makes no difference; the discovery itself is what matters
2. Organized skepticism: challenge and question everything
3. Disinterestedness: scientists must be neutral
4. Communalism: knowledge must be shared with others
5. Honesty: in all research |
|
|
Term
7. What are the two main goals of science and what questions do they ask? |
|
Definition
1. Description: What is out there? What is it?
2. Explanation: Why? (brings meaning to the description)
What is a scientific knowledge claim? How do scientific knowledge claims relate to truth? (lecture): Because we cannot observe everything at all times, there is no final truth; instead, science makes knowledge claims. |
|
|
Term
1. What makes a research question “good”? List and briefly explain each of the three criteria used to evaluate research questions. (p.22-23) |
|
Definition
• Scientific relevance: where does it fit in the literature and theory?
• Importance (salience): will it make a difference?
• Feasibility: is it possible with the resources and time available? |
|
|
Term
2. What is social theory? (p.23) What is the role of theory in the research process? (p.23) |
|
Definition
• “A social theory is a logically interrelated set of propositions about empirical reality” (23) • From Lecture “A set of interrelated constructs (concepts), definitions, and propositions that presents a systematic view of phenomena by specifying relations among variables with the purpose of explaining and predicting the phenomena.” |
|
|
Term
3. What are the major steps of the research process? |
|
Definition
1. Identify a topic/issue
2. Develop a research question
3. Theory
4. Deduction
5. Hypothesis
6. Observations
7. Analysis
8. Induction
9. Interpret data
10. Inform others |
|
|
Term
4. What is deductive research? (p.25-28) Inductive research? (p.28-30) What role does inductive reasoning play in deductive research? (p.29) How are deductive and inductive research linked to theory? (p.25) |
|
Definition
• Deductive research: “The type of research in which a specific expectation is deduced from a general premise and is then tested” -- start with a theory and find the data to test it.
• Inductive research: “The type of research in which general conclusions are drawn from specific data” -- start with data and then develop a theory that explains the phenomenon.
• Inductive reasoning: “The type of reasoning that moves from the specific to the general.” It is used in deductive research “when we find unexpected patterns in data collected.”
• As we move through the research cycle (Theory --> Hypothesis --> Data --> Empirical generalizations), we also cycle between deductive and inductive research: the theory and hypothesis let us deduce, while the data and empirical generalizations force us to induce from what we have gathered. |
|
|
Term
7. What is a dependent variable? (p.27) What is an independent variable? (p.27) What is an intervening variable? How do we determine whether a variable is a dependent variable or an independent variable? (book and lecture) |
|
Definition
Dependent variable: what we are trying to explain (everything else affects it).
Independent variable: all other factors/variables (causally prior to the dependent variable).
Intervening variable: an independent variable with a special role; it explains why the dependent variable changes, and it carries the effects that the other independent variables/controls have on the dependent variable. Therefore, it is the determining independent variable. |
|
|
Term
8. What is cross-sectional research? (p.31) What is longitudinal research? (p.31) What are their major advantages and disadvantages? (p.32-33) How are they different? (p.31-33) (book and lecture) |
|
Definition
Cross-sectional research: measures characteristics of a population at one point in time (like a snapshot).
Advantages: relatively inexpensive; describes the population.
Disadvantages: don't know who or what changed, or what happens next; can't study change effectively.
Longitudinal research: a study in which data are collected that can be ordered in time; also defined as research in which data are collected at two or more points in time.
Advantages: can determine change; you can see whether a cause occurs and then, later in time, an effect occurs; know what, who, and usually why they changed.
Disadvantages: more expensive; time-consuming. |
|
|
Term
9. What is retrospective memory? What are the limitations of retrospective memory in surveys? (lecture) |
|
Definition
Retrospective memory: recalling, across time and space, events/things/anything that have happened; it is the label we give to asking people questions, to people remembering their most important memories. Limitations: people cannot remember all the details. Do not ask people to recall “how they felt,” because people will not accurately describe their past feelings/emotions; if you ask people how they felt during an experience, they will take how they feel now and apply it to the past (telescoping). |
|
|
Term
10. List and describe the three kinds of longitudinal research designs: classical experimental design/true experiment (p.135-136), trend studies (p.34-35) and panel studies (p.35-36). Describe the two special types of trend and panel studies: cohort studies (p.36-37), follow-up studies. |
|
Definition
Classical experimental design / true experiment:
- consists of two (or more) comparison groups
- variation in the independent variable occurs before assessment of change in the dependent variable, which establishes time order
- includes random assignment to the comparison groups, which establishes nonspuriousness
- Advantages: the researcher knows who changed, what changed, when they changed, and why they changed
Trend studies (also known as repeated cross-sectional studies):
- a sample is drawn from a population at Time 1, and data are collected from the sample
- as time passes, some people leave the population and others enter it
- at Time 2, a different sample is drawn from the population
- Strengths: the researcher knows what changed and when, but not why or who
Panel studies:
- a sample, called a panel, is drawn from a population at Time 1, and data are collected from the sample (for instance, 100 freshmen are selected and interviewed)
- as time passes, some panel members become unavailable for follow-up, and the population changes (some students transfer to other colleges or decline to continue participating)
- at Time 2, data are collected from the same people (the panel) as at Time 1, except for those who cannot be located (the remaining students are reinterviewed)
- Strengths: the researcher knows who, when, and what changed, and usually why
Two special types of trend/panel studies are cohort studies and follow-up studies.
Follow-up studies: start from records; locate the people, gather them up, and get information from them. These are often used to study medical treatments, such as for cancer.
Cohort studies: longitudinal studies in which data are collected at two or more points in time from individuals who share a life experience (a cohort). Examples:
- Birth cohort: those who share a common period of birth (those born in the 1950s, 1960s, etc.)
- Seniority cohort: those who have worked in the same place for about 5 years, 10 years, and so on
- School cohort: freshman, sophomore, junior, senior |
|
|
Term
11. What is panel attrition? What are the six main sources of panel attrition? What is subject fatigue? (p.35) |
|
Definition
Panel attrition: losing people from the selected panel (three causes due to the respondent and three due to the researcher(s)):
1. the respondent's refusal to participate
2. the respondent's inability to participate (too ill, etc.)
3. mortality
4. the researcher's failure to relocate the respondent(s)
5. the researcher's failure to keep accurate records
6. the researcher's failure to interview the respondent
Subject fatigue: when panel members grow weary of repeated interviews and drop out of the study, or become so used to answering the standard survey questions that they give stock answers rather than actually thinking about their current feelings or actions. |
|
|
Term
12. What is a unit of analysis? (p.37) Why is this choice important? What is the ecological fallacy? (p.38) The reductionist fallacy? (p.40) |
|
Definition
Units of analysis: the level of social life on which a research question is focused, such as individuals, groups, towns, or nations. This choice is important because it determines the depth and type of questions to be asked, who will be answering them, how to approach them, and so forth. Ecological fallacy: taking group behavior and attributing it to an individual (from a group to one individual). Reductionist fallacy: taking data from an individual and applying it to a group (from one individual to a group). |
|
|
Term
What are ethics? Why should a researcher be ethical? |
|
Definition
Ethics: written statements about what people find acceptable or not. |
|
|
Term
2. What is an IRB? (p.43) What is the major purpose of an IRB? What is research risk? |
|
Definition
IRB: Institutional Review Board. A group of organizational and community representatives required by federal law to review the ethical issues in all proposed research that is federally funded, involves human subjects, or has any potential for harm to subjects, based on a realistic risk/benefit assessment. |
|
|
Term
3. List and briefly describe the three “human rights” according to the Belmont Report. (p.52-53) Are ethics relative? What is a code of ethics? What is the Code of Ethics of the ASA? (p.54-55) |
|
Definition
Belmont “human rights”:
1. Respect for persons: treating persons as autonomous agents and protecting those with diminished autonomy
2. Beneficence: minimizing possible harms and maximizing benefits
3. Justice: distributing the benefits and risks of research fairly
Code of Ethics of the ASA:
1. to protect research subjects
2. to maintain honesty and openness
3. to achieve valid results
4. to encourage appropriate application |
|
|
Term
4. Briefly describe the four ways researchers can protect research participants. (p.55) |
|
Definition
1. Avoid harming research participants
2. Obtain informed consent
3. Avoid deception in research, except in limited circumstances
4. Maintain privacy and confidentiality |
|
|
Term
5. What is privacy? Is it a right? What is anonymity? (p.184-185) What is confidential information? (p.42) What is the difference between anonymity and confidentiality? Is confidential data collected by sociologists really confidential? (p.65) |
|
Definition
Privacy: personal space/information; it is not a right, and it does not have a common meaning for everyone. Anonymity: no identifying information is ever recorded that could link respondents with their responses. Confidential: you have someone's information linked to their name, but you will not disclose who gave which response. The difference: with anonymity you do not know who responded; with confidentiality you do know who responded, but you keep their identity secret. |
|
|
Term
6. When a subject discloses information to a researcher or someone else, whose property is it? Why? What ethical concerns does this create? How is information used in our society? How does the “information age” affect individual privacy? |
|
Definition
Whoever you give the information to owns it. |
|
|
Term
7. List and briefly describe the five ways that the sponsorship of research poses special problems for ethical research. |
|
Definition
1) Whistle-blowing: if you are involved with unethical research you have a responsibility to let people know (but if you are sponsored, your funding might be cut)
2) Pressure to arrive at particular findings
3) Limits on how to conduct studies
4) Suppressing findings
5) Concealing the true sponsor (the sponsor should always be disclosed) |
|
|
Term
8. What does value-free science mean? Is the scientific method really objective and “value-free”? (lecture) |
|
Definition
• Science that is free from prior assumptions or theoretical stances, and free of influence from a researcher's personal biases or beliefs. |
|
|
Term
9. Discuss the ethical issues in the Milgram experiment, the Tearoom Trade study, and the Tuskegee syphilis study. Compare and contrast each. (book and lecture) |
|
Definition
Milgram experiment: a series of famous experiments conducted during the 1960s by Stanley Milgram, a Yale psychologist, testing subjects' willingness to cause pain to another person if instructed to do so. Ethical issues: subjects were lied to, experienced emotional and psychological trauma, and gave no informed consent.
Tearoom Trade: a book by Laud Humphreys investigating the social backgrounds of men who engaged in homosexual behavior in public facilities. Controversially, he did not obtain informed consent from his subjects.
Tuskegee syphilis study: a research study conducted by a branch of the U.S. government, lasting about 50 years (ending in the 1970s), in which a sample of African American men diagnosed with syphilis was deliberately left untreated, without their knowledge, to learn about the lifetime course of the disease. |
|
|
Term
What is a concept? (p.74) |
|
Definition
• An image or idea, not a tangible object, that summarizes a set of similar observations, feelings, or ideas. |
|
|
Term
What is a construct? |
|
Definition
• A concept that has been deliberately invented for a purpose (school achievement is a function of intelligence and motivation) |
|
|
Term
Why are well-defined concepts necessary in social science research? (p.74-76) (book and lecture) |
|
Definition
The same concept may take on multiple meanings depending on the theories that it is based on. Clearly defining the concept will make research more accurate and reliable. |
|
|
Term
What is conceptualization? (p.75-76) |
|
Definition
The process of specifying what we mean by a term. It helps to translate portions of an abstract theory into testable hypotheses involving specific variables. |
|
|
Term
What is operationalization? |
|
Definition
(p.78-79) The process of specifying the operations that will indicate the value of cases on a variable. An operational definition? • A definition that assigns meaning to a concept, construct, or variable by specifying the activities or operations necessary to measure that construct, concept, or variable. |
|
|
Term
What does it mean when response values are mutually exclusive? (p.82) |
|
Definition
A variable's attributes (or values) are mutually exclusive when every case can be classified as having only one attribute (or value). Exhaustive? (p.82) (book): every case can be classified as having at least one attribute (or value) for each variable. |
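The two requirements can be sketched as a quick check on a coding scheme. The age-group categories and the helper function below are invented for illustration, not from the text:

```python
# Hypothetical check that a coding scheme's values are mutually exclusive
# (each case fits exactly one category) and exhaustive (no case fits none).
def check_coding(cases, categories):
    """Return (mutually_exclusive, exhaustive) for a list of cases,
    where categories maps each category name to a membership test."""
    exclusive = True
    exhaustive = True
    for case in cases:
        matches = [name for name, test in categories.items() if test(case)]
        if len(matches) > 1:
            exclusive = False   # case fits more than one category
        if len(matches) == 0:
            exhaustive = False  # case fits no category at all
    return exclusive, exhaustive

# Flawed age groups: "18-30" and "30-45" overlap at 30, and 50 fits nowhere.
bad_scheme = {
    "18-30": lambda age: 18 <= age <= 30,
    "30-45": lambda age: 30 <= age <= 45,
}
print(check_coding([25, 30, 50], bad_scheme))  # (False, False)
```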
|
|
Term
What are scales? (p.83-84) |
|
Definition
A composite measure based on combining the responses to multiple questions pertaining to a common concept after these questions are differentially weighted, such that questions judged on some basis to be more important for the underlying concept contribute more to the composite score. |
|
|
Term
What are indexes? (p.83-84) |
|
Definition
A composite measure based on summing, averaging, or otherwise combining the responses to multiple questions that are intended to measure the same concept. How are they different? In an index, all the answers are weighted equally; in a scale they are not, so some questions influence the result more than others because they are deemed more important. Why are they used? (book and lecture) |
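The weighting difference can be made concrete with a small sketch; the four responses and the weights below are invented for illustration:

```python
# Hypothetical: four answers to questions measuring one underlying concept.
responses = [3, 5, 2, 4]

# Index: equal weighting -- a simple sum (or average) of the responses.
index_score = sum(responses)

# Scale: differential weighting -- questions judged more important for the
# underlying concept contribute more to the composite score.
weights = [1.0, 2.0, 1.0, 0.5]
scale_score = sum(w * r for w, r in zip(weights, responses))

print(index_score)  # 14
print(scale_score)  # 17.0 (the heavily weighted second question dominates)
```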
|
|
Term
What is "content analysis"? (p.85) |
|
Definition
A research method for systematically analyzing and making inferences from text. What are the strengths and weaknesses of content analysis? (book and lecture)
• Strengths/advantages:
o Does not involve human research subjects
o Inexpensive
o Can be used when the researcher is prevented from surveying or observing the population being studied
o Can be computer-assisted
• Weaknesses/disadvantages:
o Locating messages relevant to the research question can be difficult
o Some topics do not appear regularly in the available media
o It cannot be used to test causal relationships between variables
o Selectivity: the gatekeepers of media decide what is available |
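A minimal sketch of the idea of systematically coding text: count how often predefined coding categories appear in a body of documents. The documents and categories are hypothetical:

```python
# Toy content analysis: tally occurrences of coding categories in text.
from collections import Counter
import re

documents = [
    "The mayor promised new jobs and lower crime.",
    "Crime fell last year, but jobs remain scarce.",
]
coding_categories = {"jobs", "crime"}  # hypothetical coding scheme

counts = Counter()
for doc in documents:
    for word in re.findall(r"[a-z]+", doc.lower()):  # normalize case, strip punctuation
        if word in coding_categories:
            counts[word] += 1

print(counts["jobs"], counts["crime"])  # 2 2
```

A real study would add explicit coding rules and inter-coder reliability checks; this only shows the mechanical counting step.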
|
|
Term
What are unobtrusive measures (nonreactive research)? (p.86) Give examples. (p.86) |
|
Definition
A measurement based on physical traces or other data that are collected without the knowledge or participation of the individuals or groups that generated the data. Four types:
1) physical trace evidence
2) archives (available data)
3) simple observation
4) contrived observation (using hidden recording hardware or manipulation to elicit a response) |
|
|
Term
What is the purpose of triangulation? (p.87) |
|
Definition
The use of two or more different measures of the same variable to strengthen the measurement. If the measures do not produce close to the same result, the conclusion will be weak. |
|
|
Term
What are the four basic levels of measurement? Be able to define and give examples of each. (p.88-93) (book and lecture) |
|
Definition
Nominal level of measurement: Variables whose values have no mathematical interpretation; they vary in kind or quality but not in amount.
Ordinal level of measurement: A measurement of a variable in which the numbers indicating a variable’s values specify only the order of the cases, permitting “greater than” and “less than” distinctions.
Interval level of measurement: A measurement of a variable in which the numbers indicating a variable’s values represent fixed measurement units but have no absolute, or fixed, zero point.
Ratio level of measurement: A measurement of a variable in which the numbers indicating the variable’s values represent fixed measuring units and an absolute zero point. |
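The four levels can be illustrated with a few toy variables showing which comparisons each level permits; the example variables are assumptions, not from the text:

```python
# Toy variables (hypothetical) showing what each level of measurement permits.
marital_status = ["single", "married"]                                  # nominal
class_year = {"freshman": 1, "sophomore": 2, "junior": 3, "senior": 4}  # ordinal
temp_f = [40, 80]                                                       # interval
income = [20000, 40000]                                                 # ratio

assert marital_status[0] != marital_status[1]        # nominal: only same/different
assert class_year["senior"] > class_year["freshman"]  # ordinal: order, not distance
assert temp_f[1] - temp_f[0] == 40                   # interval: differences meaningful,
                                                     # but 80F is not "twice as hot" as 40F
assert income[1] / income[0] == 2                    # ratio: true zero, so $40,000
                                                     # really is twice $20,000
print("all four levels illustrated")
```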
|
|
Term
List and briefly describe the three major criteria for evaluating the adequacy of a measure. |
|
Definition
Precise: how accurate is the measure, and how large is the measurement error?
Valid: are we measuring what we think we are measuring?
Reliable: if we measure the same objects again and again with the same instrument, will we get the same or similar results? |
|
|
Term
13. Be able to define and provide examples of the six major types of measurement validity in quantitative studies. (p.93-96) |
|
Definition
Face validity: an inspection of the items used to measure a concept suggests that they are appropriate “on their face,” i.e., they obviously pertain to the meaning of the concept being measured.
Content validity: established when the measure covers the full range of the concept's meaning.
Concurrent validity: when scores on a measure are closely related to scores on a criterion measured at the same time.
Predictive validity: when a measure predicts scores on a criterion measured in the future.
Construct validity: established by showing that a measure is related to other measures as specified in a theory.
Criterion validity: established by comparing the scores obtained on the measure being validated to those obtained with a more direct or already validated measure of the same phenomenon (the criterion). |
|
|
Term
List and briefly describe the four indicators of reliability. |
|
Definition
1. Test-retest
2. Inter-item reliability (scale)
3. Alternate forms (multiple measures)
4. Inter-observer reliability (qualitative studies) |
|
|
Term
15. List and briefly describe the four ways you can improve measurement reliability. |
|
Definition
1. Check the operational definition: do you have all theoretically relevant components of the construct?
2. Increase the level of measurement (sometimes helps): make the measure more precise.
3. Use multiple indicators: two or three indicators of the same thing are better than one.
4. Pretest the measure: remove or exchange questions that produce inconsistent results. |
|
|
Term
2. Describe the sampling process. List and briefly describe all key components of the sampling process: |
|
Definition
Population (p.107-108): the entire group we are researching (often defined geographically).
Sampling unit/elements (p.108): the units selected from within the population.
Sampling frame (p.108): the list from which the sample is drawn; it should contain every person possible.
Sample (p.108): a random selection from the frame.
Completed sample: those who actually respond.
Coverage error: an error with your frame.
Sampling error: you selected your sample poorly.
Non-response error: too many people from a given group did not reply.
Parameter (p.192): refers to the population.
Statistic (p.192): refers to the sample. |
|
|
Term
3. Why is a sampling frame important in drawing a good sample? What information sources can be used to construct a sampling frame? What is a strength and weakness of each? |
|
Definition
Gallup did a good job of drawing a sampling frame: he captured multiple social classes, ethnicities, genders, and incomes, so his survey was a more accurate representation of the people. Sources:
Utility records: strength, a wide range of people have utilities / weakness, the records are difficult to access and may contain only homeowners.
City records: strength, lots of data (including unlisted phones) / weakness, often out of date and cover only cities.
Phone book: strength, 98% of homes have phones / weakness, one-third of home phones are unlisted. |
|
|
Term
4. What is the probability sampling method (random sampling)? (p.112) List and briefly describe the two errors that should be considered when drawing random samples. (p.113) What is a table of random numbers? (p.116) How is it used? |
|
Definition
Probability sampling method: a sampling method that relies on a random, or chance, selection method so that the probability of selection of population elements is known.
Two errors to consider:
1) If the sampling frame is incomplete, a sample selected randomly from that list will not really be a random sample of the population. Always consider the adequacy of the sampling frame; even for a simple population like a university's student body, the registrar's list is likely to be at least somewhat out of date at any given time.
2) Non-response is a major hazard in survey research, because nonrespondents are likely to differ systematically from those who take the time to participate. If the response rate is low, you should not assume that findings from even a random sample will be generalizable to the population.
A table of random numbers is a table containing lists of numbers that are ordered solely on the basis of chance; it is used for drawing a random sample. How it is used: the researcher numbers all elements in the sampling frame and then uses a systematic procedure for picking corresponding numbers from the random number table. (The “Doing Research” section at the back of the chapter, exercise 1, gives a step-by-step procedure.) |
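A pseudo-random number generator now plays the role of the printed random number table. The sketch below numbers every element in a hypothetical frame of 200 students and draws a simple random sample; the frame contents and sample size are invented:

```python
# Simple random sampling: every element in the frame has a known, equal
# chance of selection, drawn without replacement.
import random

sampling_frame = [f"student_{i:03d}" for i in range(1, 201)]  # hypothetical frame

random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(sampling_frame, k=10)

print(len(sample))  # 10
```

Note this sketch does nothing about the two errors above: an incomplete frame or non-response still biases the result no matter how random the draw is.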
|
|
Term
Six Stages of Data Analysis |
|
Definition
1. Coding
2. Data entry
3. Descriptive analysis
4. Data cleaning
5. Cross-tab analysis
6. Testing relationships |
|
|
Term
A GOOD Codebook contains: |
|
Definition
- Introduction
- Map between questionnaire and data file
- Special coding
- History of the study
- Copies of surveys |
|
|
Term
Raw Data, Arrays, and Frequency Distributions |
|
Definition
Raw data: answers to questions. Example: How old are you? 17 19 16 17 18 16 18 17
Array: an ordered list of scores. Example: 16 16 17 17 17 18 18 19
Frequency distribution: grouping scores into a series of categories. Example:
Age  Frequency
16   2
17   3
18   2
19   1 |
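The raw data, array, and frequency distribution steps can be reproduced directly in code; only the variable names are new:

```python
# Raw data -> ordered array -> frequency distribution, using the ages example.
from collections import Counter

raw_data = [17, 19, 16, 17, 18, 16, 18, 17]  # answers to "How old are you?"

array = sorted(raw_data)   # array: an ordered list of scores
freq = Counter(array)      # frequency distribution: counts per category

print(array)               # [16, 16, 17, 17, 17, 18, 18, 19]
for age in sorted(freq):
    print(age, freq[age])  # 16 2 / 17 3 / 18 2 / 19 1
```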
|
|
Term
Computing the Standard Deviation |
|
Definition
1. Compute the mean: the sum of the scores divided by the number of scores.
Formula for the mean: X̄ = ∑X / N
2. Subtract the mean from each score: (X - X̄).
Note: when you total up the differences, the sum is always zero: ∑(X - X̄) = 0
3. Square the difference for each score: (X - X̄)².
4. Total up the squared differences to get the sum of squares: ∑(X - X̄)²
5. Divide the sum of squares by the number of scores to get the variance (divide by N - 1 for a sample, N for a full population).
Formula for the sample variance: s² = ∑(X - X̄)² / (N - 1)
6. Take the square root of the variance, which is the standard deviation.
Formula for the standard deviation: s = √[ ∑(X - X̄)² / (N - 1) ] |
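Applying the computation to the ages example from the earlier card (sample formula, dividing by N - 1); the variable names are invented:

```python
# Step-by-step sample standard deviation for the ages example.
import math

scores = [17, 19, 16, 17, 18, 16, 18, 17]
n = len(scores)

mean = sum(scores) / n                           # step 1: 138 / 8 = 17.25
deviations = [x - mean for x in scores]          # step 2
assert abs(sum(deviations)) < 1e-9               # note: deviations always sum to zero
sum_of_squares = sum(d ** 2 for d in deviations) # steps 3-4: 7.5
variance = sum_of_squares / (n - 1)              # step 5 (sample variance)
std_dev = math.sqrt(variance)                    # step 6

print(round(std_dev, 4))  # 1.0351
```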
|
|
Term
Computing the Z-Score |
|
Definition
Z-score = (Score - Mean) / Standard Deviation
Formula for the Z-score:
Z = (X - X̄) / s |
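A short sketch of the computation, reusing the mean and sample standard deviation from the ages example (values rounded; the function name is invented):

```python
# A z-score says how many standard deviations a score lies from the mean.
def z_score(x, mean, std_dev):
    # assumes std_dev > 0
    return (x - mean) / std_dev

# Ages example: mean 17.25, sample standard deviation ~1.0351.
print(round(z_score(19, 17.25, 1.0351), 2))    # 1.69: about 1.7 SDs above the mean
print(round(z_score(17.25, 17.25, 1.0351), 2))  # 0.0: the mean itself scores zero
```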
|
|