Term
5 Basic Principles of Scientific Research |
|
Definition
1. Public -idea that researchers make information available so that others may independently verify their claims.
-typically means sharing sampling procedures, question wording, etc.
2. Objective -researchers who conduct the same study should get the same results -bias can't affect the study; concerned with facts, not interpretations or impressions.
3. Empirical -looking at what you can measure; test things that are falsifiable (can be proven wrong). 1. constitutive definition: MD blue crab is a shellfish (cultural definition) 2. operational definition (how you experience or measure a concept): MD blue crab is a local experience; crabs can be boiled or made into crab cakes
4. Systematic & Cumulative -based on theory; builds theory. -theory: a set of related and logically consistent propositions that explains relationships among two or more concepts.
5. Predictive -a good theory predicts a phenomenon or series of events successfully and correctly. example 1: voting behavior example 2: consumer behavior example 3: crisis communication (think BP) |
|
|
Term
Steps in the Research Process |
|
Definition
1. Select a Problem 2. Review Existing Research and Theory 3. Develop Specific Hypotheses and Questions 4. Determine Appropriate Methodology/Design 5. Collect Data 6. Analyze and Interpret Results -how do they square with theory and hypotheses? 7. Disseminate Findings 8. Replicate |
|
|
Term
What are some research Concerns? |
|
Definition
1. Validity -internal validity: the degree to which a study's results can be attributed to the variables under study rather than to other factors. -external validity: the degree to which the results are generalizable to other situations
2. Reliability 3. Data -original or secondary -original is preferred, because it enables you to tailor every aspect of data collection -secondary is still valuable and is often the best way to communicate |
|
|
Term
What are some ethical concerns? |
|
Definition
1. Privacy -Anonymity -Confidentiality
2. Do not Harm -Concealment -Deception |
|
|
Term
What is a Hypothesis? |
|
Definition
A formal statement regarding the relationship between variables; it is tested directly |
|
|
Term
What is a Research Question |
|
Definition
A formally stated question regarding the relationship between variables; posed when there is not enough prior evidence to predict a specific relationship (hypothesis). |
|
|
Term
A good research question is: (10) |
|
Definition
1. manageable in scope 2. can be effectively studied/measured 3. leads to data that can be carefully analyzed 4. successfully addresses "so what?" 5. is generalizable 6. feasible (time/money) 7. parsimonious 8. ethical 9. builds upon and is consistent with previous research
example of a good question: does watching violent programs on TV affect perceptions of real-world violence?
bad question: how does watching violent programs on TV affect viewers? |
|
|
Term
Types of Qualitative Research |
|
Definition
A. Field observation B. Focus groups C. In-depth interviews D. Case studies E. Ethnography F. Online qualitative research |
|
|
Term
Types of Qualitative Research |
|
Definition
1. Field Observation
2. Focus Groups
3. In-depth Interviews
4. Case Studies
5. Ethnographies
6. Online
|
|
|
Term
Field Observation |
|
Definition
a. serves as a pilot study
b. interact with group under study
c. level of interaction/type of observation
-covert vs. overt
-observer vs. participating
d. be a part of and become immersed in the experience
e. also called participant observation
PROS: best way to understand group activity, cost effective, easy way to gain access to hard-to-reach groups, gather good pilot information |
|
|
Term
PROS of Field Observation |
|
Definition
-best way to understand group activity
-cost effective
-easy way to gain access to hard-to-reach groups
-good pilot information
|
|
|
Term
CONS of Field Observation |
|
Definition
-sampling concerns
-how to take notes?
-external validity: is cross-validation possible?
-reactivity: how do we respond to being observed?
-what's the exit strategy?
-how do we lead the group without letting them know you're collecting data? |
|
|
Term
Focus Groups |
|
Definition
-led by moderator
-structure follows moderator's guide/outline
-usually 6-10 participants, but 8-10 ideal
-useful for generating key words and terms to be used in survey research
-participants are from small targeted groups
-needs to have multiple groups, multiple locations
-lots of transcripts and videos
-logistics (timing, incentives, recruiting, etc.) |
|
|
Term
PROS of Focus Groups |
|
Definition
-great way to generate words and descriptions of a product
-quick and dirty read on a topic
-relatively cost effective
-conversational, less inhibited responses
-can pair with an initial response |
|
|
Term
CONS of Focus Groups |
|
Definition
-dominant vs quiet participants
-skills of moderator
-don't really offer quantitative data
-just counts
-sampling concerns: are focus groups representative? |
|
|
Term
In-depth Interviews |
|
Definition
A. also called intensive interviews
B. Conducted with small, select groups of people
C. open-ended as opposed to closed-ended responses
D. Really dependent upon the interviewer and the interviewee
E. useful when you want to reach an elite audience
|
|
|
Term
PROS of In-depth Interviews |
|
Definition
-high level of detail
-more accurate answers to sensitive questions
-maybe best way to reach certain audiences
-good way to do a series of meetings/interviews |
|
|
Term
CONS of In-depth Interviews |
|
Definition
-generalizability of sample
-interviewer bias
-how to deal with another interviewer's data
-those who say yes vs. total sample, attrition, cost, and scheduling |
|
|
Term
Case Studies |
|
Definition
A. very broad program of research
B. Cases
-can be individual or a group of individuals
-can be an event or series of events
C. Variety of data sources and types of collections
D. Often criticized for lack of scientific rigor and concerns about generalizability
E. Defined by 5 stages:
1. Design
2. Pilot Study
3. Data Collection
4. Data analysis
5. Report Writing
Example of Case Study: Don Draper |
|
|
Term
PROS of Case Studies |
|
Definition
A. Allows for multiple types of data collection
B. Offers rich descriptive detail
C. useful pilot analysis
D. compare with prior research |
|
|
Term
CONS of Case Studies |
|
Definition
A. Criticized for lack of scientific rigor
B. Is one case generalizable?
C. Subjectivity
D. Does the current case fit with existing research paradigms? |
|
|
Term
Ethnography |
|
Definition
A. classic ethnographies embedded researchers in new cultures.
B. contemporary ethnography
i. still puts the researcher in the middle of the group/topic
ii. studying the RQ from the participants' frame of reference
iii. considerable field work
|
|
|
Term
Online Qualitative Research |
|
Definition
A. Relative Anonymity
B. still only observing part of the experience
C. technology issues
D. generalizability
E. Representativeness & samples
F. External validity concerns |
|
|
Term
Ethical Issues Surrounding Qualitative Research |
|
Definition
A. All normal human subjects concerns
B. Particular concerns about deception and privacy
C. Always be careful to project your research findings |
|
|
Term
Concept |
|
Definition
A term that expresses an abstract idea formed by generalizing from particulars and summarizing related observations.
Important for two reasons
1. they simplify the research process by combining particular characteristics, objects, or people into general categories.
2. concepts simplify communication among those who have a shared understanding of them. Researchers use them to organize their observations into meaningful summaries and to transmit this information to others. |
|
|
Term
Construct |
|
Definition
A. Is a concept that has three distinct characteristics:
1. It is an abstract idea that is usually broken down into dimensions represented by lower-level concepts; a construct is a combination of concepts.
2. Because of its abstraction, a construct usually cannot be observed directly.
3. A construct is usually designed for a specific research purpose so that its exact meaning relates only to the context in which it is found. |
|
|
Term
What is a variable?
(p.44) |
|
Definition
A. The empirical counterpart of a construct or concept.
B. Important because they link the empirical world with the theoretical; they are the phenomena and events that are measured or manipulated in research.
C. Researchers try to test a number of associated variables to develop an underlying meaning or relationship among them. |
|
|
Term
Marker Variables |
|
Definition
A. The most important variables that are kept after suitable analysis.
B.They tend to define or highlight the construct under the study.
C. After additional studies, new marker variables may be added to increase understanding of the construct and to allow for more reliable predictions |
|
|
Term
Independent Variables |
|
Definition
A. Systematically varied by the researcher |
|
|
Term
Dependent variables (p.44) |
|
Definition
A. Are observed and their values are presumed to depend on the effects (influence) of the independent variables.
B. The dependent variable is what the researcher wishes to explain.
|
|
|
Term
Control Variables |
|
Definition
A. Ensure that the results of the study are due to the independent variables, not to another source.
B. Control variables need not always be used to eliminate an unwanted influence.
C. On occasion, researchers use a control variable such as age, gender, or socioeconomic status to divide subjects into specific relevant categories |
|
|
Term
|
Definition
-all the variables that may create spurious or misleading results.
|
|
|
Term
PROS of Ethnography |
|
Definition
A. really get to know subjects and communities
B. takes advantage of multiple research methods
C. long fieldwork period makes research subjects more comfortable around researchers |
|
|
Term
Discrete Variables (p.44) |
|
Definition
A. Includes only a finite set of values B. It cannot be divided into subparts
Example: The number of children in a family is a discrete variable because the unit is a person |
|
|
Term
Continuous Variables |
|
Definition
A. Can take on any value, including fractions, and can be meaningfully broken into smaller subsections.
B. Height is a continuous variable. If the measurement tool is sophisticated enough, it is possible to distinguish between one person 72.12 inches tall and another 72.13 inches tall. |
|
|
Term
Predictor variable (p.45) |
|
Definition
A. the variable that is used for predictions or is assumed to be causal (analogous to the independent variable). |
|
|
Term
Criterion Variable |
|
Definition
the variable that is predicted or assumed to be affected (analogous to the dependent variable) |
|
|
Term
Levels of Measurement |
|
Definition
1. Nominal 2. Ordinal 3. Interval 4. Ratio |
|
|
Term
Nominal Measurement (p.51) |
|
Definition
A. The weakest form B. Equivalence (everything is equal); categorical C. -categories are exhaustive & mutually exclusive -typically reduced to a series of "dummy variables" -Basic examples: religion, region, cars (old, new, foreign or domestic, have a car/don't have a car), colors, political party |
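A minimal sketch of the "dummy variables" idea, assuming Python with pandas (neither is named in the cards) and a made-up political-party variable:
```python
import pandas as pd

# Hypothetical nominal variable: categories are exhaustive and mutually
# exclusive, and the labels carry no order or magnitude.
respondents = pd.DataFrame(
    {"party": ["Democrat", "Republican", "Independent", "Democrat"]}
)

# Reduce the nominal variable to a series of 0/1 "dummy" variables,
# one column per category.
dummies = pd.get_dummies(respondents["party"], prefix="party")
print(dummies)
```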
|
|
Term
Ordinal level of Measurement (p.52) |
|
Definition
-compare across categories through ranking (like the Olympics) -have the property of equivalence -usually ranked on some dimension from smallest to largest -possess the property of order among the categories; any category can be higher or lower than another category |
|
|
Term
Interval Level of Measurement |
|
Definition
A. When the scale has all the properties of an ordinal scale and the intervals between adjacent points on the scale are of equal value. B. no true zeroes in the measurement C. Most obvious example is temperature |
|
|
Term
Ratio Level of Measurement |
|
Definition
A. Have all the properties of interval scales plus one more: the existence of a true zero point. With the introduction of this true zero point ratios can be made. |
|
|
Term
Scales |
|
Definition
A composite or combined measure of multiple variables is a scale.
-different types and levels of measurement |
|
|
Term
Three types of scales (p.55) |
|
Definition
1. Simple scales 2. Specialized scales 3. Semantic differential scales |
|
|
Term
Specialized Scales |
|
Definition
A. multiple types of scales (Thurstone, Guttman) B. Likert is most popular for mass media research C. -series of related statements -responses coded low to high -can be combined to create a more reliable scale/measurement |
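A minimal sketch of point C, assuming Python and made-up item names: a series of related Likert statements coded low to high, combined into a single, more reliable scale score.
```python
# Hypothetical responses from one person to three related Likert items,
# each coded 1 (strongly disagree) to 5 (strongly agree).
responses = {"item1": 4, "item2": 5, "item3": 3}

# Combining related statements into one composite tends to yield a more
# reliable measurement than any single item on its own.
summed_score = sum(responses.values())        # summed scale: 12
mean_score = summed_score / len(responses)    # item mean: 4.0
print(summed_score, mean_score)
```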
|
|
Term
Rating Scales |
|
Definition
A. rate a list of items on given scales B. generally, the more points or numbers on the scale, the better the variation C. better to have higher numbers indicate agreement or a more favorable evaluation
Ex. customer service rating 1=poor, 2=fair, 3=good, 4=very good, 5=excellent 1. prompt service 2. knowledgeable salespeople 3. explanation of return policy |
|
|
Term
Semantic Differential Scales |
|
Definition
A. measure attitudes toward a concept using bipolar adjective anchors ("reliable/unreliable", "equal/unequal") B. adjectives should be unique to a particular research question C. Helps to give a sense of the subject's semantic space
ex. evaluations of candidate X's campaign for governor
Experienced xxxxxx Inexperienced |
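A minimal sketch of scoring the example above, assuming Python, a 7-point scale, and made-up ratings:
```python
# Hypothetical 7-point bipolar item from the example:
# Experienced (7) x x x x x x (1) Inexperienced
anchors = ("Experienced", "Inexperienced")
ratings = [6, 7, 5, 6, 4]  # five respondents rating candidate X

# The mean locates candidate X in the respondents' semantic space
# along this experienced/inexperienced dimension.
mean_rating = sum(ratings) / len(ratings)
print(anchors, mean_rating)  # ('Experienced', 'Inexperienced') 5.6
```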
|
|
Term
Reliability |
|
Definition
A. a measure is reliable if it consistently gives the same answer B. measures are imperfect and, even when done properly, vary based on random error C. examples: easy grades, voting, survey results, wine ratings
Three kinds of reliability: 1. stability (test-retest) 2. internal consistency (split-half: responses to the first half of a multi-item scale correlate with the second half) 3. equivalency (cross-test/interdependency) |
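A minimal sketch of the split-half idea in point 2, assuming Python and made-up scores for a six-item scale:
```python
# Hypothetical data: each row is one respondent's answers to a 6-item scale.
scores = [
    [4, 5, 4, 5, 4, 5],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 4],
    [5, 4, 5, 5, 5, 4],
]

# Split-half check: total the first three items and the last three items
# for each respondent, then correlate the two halves.
first_half = [sum(row[:3]) for row in scores]
second_half = [sum(row[3:]) for row in scores]

def pearson_r(x, y):
    # Pearson correlation between two equal-length lists.
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# A high correlation between halves suggests internal consistency.
print(pearson_r(first_half, second_half))
```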
|
|
Term
Validity |
|
Definition
A. a valid measuring device measures what it is supposed to measure; or, to put it another way, determining validity requires an evaluation of the congruence between the operational definition of a variable and its conceptual or constitutive definition. Four types of validity: 1. Face Validity 2. Predictive Validity 3. Concurrent Validity 4. Construct Validity |
|
|
Term
Null Hypothesis |
|
Definition
A. H0 B. hypothesis of no difference C. the logical alternative to the research hypothesis D. researchers rarely state the null hypothesis |
|
|
Term
Statistical significance
(p.296) |
|
Definition
When researchers find that the results of a study are nonsignificant, it is common to downplay the results to de-emphasize the finding that the results were not statistically significant |
|
|
Term
p-values |
|
Definition
are estimates of the probability that the results obtained are due to chance (rather than what we think is responsible for the results) |
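A minimal sketch of how a p-value is read in practice, assuming Python with scipy and made-up scores for two groups:
```python
from scipy import stats

# Hypothetical scores from two groups (e.g., viewers vs. non-viewers).
group_a = [3.1, 3.4, 2.9, 3.8, 3.3, 3.6]
group_b = [2.4, 2.8, 2.2, 2.9, 2.5, 2.7]

# The t-test's p-value estimates the probability of results at least this
# extreme if chance alone (the null hypothesis) were responsible.
t_stat, p_value = stats.ttest_ind(group_a, group_b)

alpha = 0.05  # significance level chosen in advance
print(p_value, p_value < alpha)  # reject the null only if p < alpha
```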
|
|
Term
Significance Level |
|
Definition
the probability (for example, .05 or .01) of rejecting a null hypothesis that is, in fact, true; also called the alpha level. |
|
|
Term
Sampling Distribution |
|
Definition
A. A probability distribution of all possible values of a statistic that would occur if all possible samples of fixed size were taken from a given population.
|
|
|
Term
Type I Error |
|
Definition
A. alpha error
B. We might reject the null when it is actually true.
C. is equal to the established significance level and is therefore under the direct control of the researcher.
D. to reduce the probability of a Type I error, the researcher can simply set the level of significance closer to zero (see the sketch below)
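A minimal sketch of points C and D, assuming Python with scipy: when the null is true, the false-rejection rate hovers near whatever alpha the researcher sets.
```python
import random
from scipy import stats

random.seed(1)
alpha = 0.05        # significance level set directly by the researcher
trials = 2000
false_rejections = 0

# Simulate studies in which the null hypothesis is actually true:
# both samples come from the same population, so any "significant"
# result is a Type I (alpha) error.
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_rejections += 1

# The observed rate should be close to alpha (about 5%); setting alpha
# closer to zero lowers the probability of a Type I error.
print(false_rejections / trials)
```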
|
|
|
Term
Type II Error |
|
Definition
A. Beta error
B. We might accept the null when it is actually false.
C. the researcher does not have direct control over Type II error.
D. Type II error is controlled indirectly through the design of the research.
E. inversely proportional to the level of Type I error (as Type I decreases, Type II increases) |
|
|
Term
Sampling Error |
|
Definition
the degree to which measurements obtained from a sample differ from the measurements that would be obtained from the population. |
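One common way to put a number on this is the standard error of the mean; a minimal sketch, assuming Python and a made-up sample of 10 measurements:
```python
# Hypothetical sample drawn from a much larger population.
sample = [23, 19, 25, 22, 20, 24, 21, 26, 18, 22]

n = len(sample)
mean = sum(sample) / n
# Sample standard deviation (n - 1 in the denominator).
sd = (sum((x - mean) ** 2 for x in sample) / (n - 1)) ** 0.5
# Standard error of the mean: how far sample means tend to fall from the
# population mean; it shrinks as the sample size grows.
standard_error = sd / n ** 0.5
print(round(mean, 2), round(standard_error, 2))
```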
|
|
Term
|
Definition
variables that create spurious or misleading results |
|
|
Term
non-sampling error (p.88) |
|
Definition
error created by every other aspect of a research study, such as measurement errors, data analysis errors, the influence of the research situation itself, or even error from an unknown source that can never be identified and controlled or eliminated.
Two types:
1. random error: relates to problems where measurements and analyses vary inconsistently from one study to another. Results from one study may lean in one direction but then lean in the opposite direction when the study is repeated at a later time.
2. systematic error: consistently produces incorrect (invalid) results in the same direction or same context and is, therefore, predictable. Researchers are, however, able to identify the cause of systematic error. |
|
|
Term
|
Definition
a form of measurement such as 10-point scales, Likert, Guttman, or semantic differential |
|
|
Term
Sample |
|
Definition
a subset of the population that is representative of the entire population |
|
|
Term
Population |
|
Definition
a group or class of subjects, variables, concepts or phenomena. |
|
|
Term
Non-probability Sampling |
|
Definition
A. does not follow the guidelines of mathematical probability.
B. non-probability sampling does not allow researchers to calculate the sampling error present in a research study.
Three Types:
A. Convenience/volunteer
B. Purposive/quota
C. Snowball |
|
|
Term
Probability Sampling (p.97) |
|
Definition
A. uses mathematical guidelines whereby each unit's chance of selection is known.
Types of probability sampling
1. simple random sample: (RDD, ABS)
2. systematic random sampling
3. sampling interval |
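A minimal sketch of type 2 above (systematic random sampling, using a sampling interval), assuming Python and a hypothetical sampling frame of 1,000 units:
```python
import random

# Hypothetical sampling frame: 1,000 numbered population members.
population = list(range(1, 1001))
sample_size = 100

# Sampling interval = population size / desired sample size.
interval = len(population) // sample_size   # 10
start = random.randint(0, interval - 1)     # random starting point

# Select every "interval-th" unit after the random start.
sample = population[start::interval]
print(len(sample), sample[:5])
```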
|
|