Term
Units of analysis
What are we studying? |
|
Definition
Individuals, Groups, Organizations, Social Artifacts |
|
|
Term
Individuals |
Definition
police, gang members, students, etc. |
|
|
Term
Groups |
Definition
multiple persons with the same characteristics |
|
|
Term
Organizations |
Definition
formal groups with established leaders and rules |
|
|
Term
Social Artifacts |
Definition
Products of social beings and their behavior. |
|
|
Term
Hasty Generalization |
Definition
Drawing conclusions about groups from individual-level data; a cause of stereotypes |
|
|
Term
Ecological Fallacy |
Definition
Drawing conclusions about individuals from group-level data; an effect of stereotypes |
|
|
Term
Examples of hasty generalization |
|
Definition
Joe, a Canadian, is against gun control; therefore all Canadians must be against gun control |
|
|
Term
examples of ecological fallacy |
|
Definition
Presidential elections of 2000, 2004, and 2008: wealthier states tended to vote Democratic and poorer states tended to vote Republican. From this data, people might think that wealthier voters tend to vote Democratic and poorer voters tend to vote Republican, BUT at the individual level the pattern was the opposite: wealthier voters tended to vote Republican. Drawing individual-level conclusions from state-level data is the ecological fallacy. |
|
|
Term
what causes juvenile delinquency? |
|
Definition
Deterministic characteristics: lack of parental supervision; peer group association; early childhood experiences; amount/kind of education. Free-will aspects: why didn't you decide to slack off in school? |
|
|
Term
Probabilistic Causal Model |
|
Definition
Necessary cause: a condition that MUST be present for the effect to occur (e.g., being charged before being convicted). Sufficient cause: a condition that, if present, essentially guarantees that the effect will occur (e.g., pleading guilty before being convicted). |
|
|
Term
Necessary condition example |
Definition
In order for it to be true that "John is a bachelor," it must also be true that he is: male, adult, and unmarried. To state "John is a bachelor" implies that John has each of those three additional predicates. |
|
|
Term
Sufficient condition example |
Definition
Stating that "John is a bachelor" implies that John is male; knowing that John is a bachelor is sufficient to know that he is male. |
|
|
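The bachelor cards above can be sketched in code. This is a minimal illustration (the predicate names and the `is_bachelor` helper are mine, not from the cards) of how necessary and sufficient conditions behave.

```python
# Sketch of necessary vs. sufficient conditions using the "bachelor"
# example from the cards; the predicate names are illustrative.

def is_bachelor(person):
    """Being male, adult, and unmarried are each NECESSARY for
    bachelorhood; jointly they are SUFFICIENT."""
    return person["male"] and person["adult"] and not person["married"]

john = {"male": True, "adult": True, "married": False}

# Sufficiency: knowing John is a bachelor guarantees he is male.
assert is_bachelor(john) and john["male"]

# Necessity: remove one necessary condition and bachelorhood fails.
married_john = dict(john, married=True)
assert not is_bachelor(married_john)
```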
Term
Necessary cause example |
Definition
Being at least 30 years old is necessary for serving in the U.S. Senate: if you are under 30 years old, it is impossible for you to be a senator. Therefore, if you are a senator, you are at least 30 years old. |
|
|
Term
Sufficient cause example |
Definition
A U.S. president's signing a bill that Congress passed is sufficient to make the bill law. Note that the case where the president did not sign the bill (e.g., through exercising a presidential veto) does not mean that the bill has not become law; it could still have become law through a congressional override. |
|
|
Term
Studying Human Behavior
- Criteria for causality |
|
Definition
- empirical relationship between variables - temporal order - no alternative explanations |
|
|
Term
Statistical Conclusion Validity |
|
Definition
Hypothesis testing: is there a statistical relationship between the change in the suspected cause and the change in the suspected effect? Sample size is important! |
|
|
Term
Cross-Sectional vs. Longitudinal Designs |
Definition
Cross-sectional design: studying a sample of subjects at one point in time; collecting data once. Longitudinal design: studying subjects over a long period of time; repeated measures. |
|
|
Term
Types of Longitudinal Designs |
Definition
Trend (ex - UCR): study changes within some general population over time. Cohort (ex - Wolfgang): examine more specific populations as they change over time. Panel (ex - NCVS): similar to trend, but the same set of people is interviewed on two or more occasions. |
|
|
Term
Approximating Longitudinal Studies |
|
Definition
Retrospective research: asks people to recall their past for the purpose of approximating observations over time. Prospective research: follows subjects forward in time. |
|
|
Term
Longitudinal Designs
Advantages & Disadvantages |
|
Definition
Advantages: more representative of the true phenomenon; convergence of methodology. Disadvantages: time; resource demands; multiple threats to internal validity. |
|
|
Term
No Alternative Explanations |
|
Definition
Internal validity: the extent to which your study actually measures what it says it is measuring. Does X actually cause Y? Is there any reason you might be able to conclude that something else (Z) caused Y? |
|
|
Term
Three Classes that Scientists Measure |
|
Definition
Direct observables: those things or qualities we can observe directly. Indirect observables: require relatively more subtle, complex, or indirect observations for things that cannot be observed directly. Constructs: theoretical creations; cannot be observed directly or indirectly. |
|
|
Term
Progression of Measurement
(CCOM) |
|
Definition
Conceptualization → conceptual definition → operational definition → measurements in the real world |
|
|
Term
Exhaustive and Exclusive Measurement |
|
Definition
Exhaustive: you should be able to classify every observation in terms of one of the attributes composing the variable. Mutually exclusive: you must be able to classify every observation in terms of one and only one attribute. |
|
|
Term
Not Exhaustive example |
Definition
What is your current age? 20-24 / 25-29 / 30-34 (respondents under 20 or over 34 have no category to choose) |
|
|
Term
Not Mutually Exclusive example |
|
Definition
What is your current age? 10 or less / 10 to 20 / 20 to 30 / 30 to 40 / 40 to 50 / 50 or greater (boundary ages such as 20 fall into two categories) |
|
|
Term
Both Exhaustive and Mutually Exclusive example |
|
Definition
What is your current age? less than 18 / 18-29 / 30-39 / 40-49 / 50 or older |
|
|
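The exhaustive-and-mutually-exclusive categories from the card above can be expressed as a classification function. This is a minimal sketch (the function name is mine); because the branches cover every non-negative age and each age matches exactly one branch, the scheme is both exhaustive and mutually exclusive.

```python
# Minimal sketch: age categories that are both exhaustive and mutually
# exclusive, mirroring the example card.

def age_category(age):
    """Every non-negative age maps to exactly one category."""
    if age < 18:
        return "less than 18"
    elif age <= 29:
        return "18-29"
    elif age <= 39:
        return "30-39"
    elif age <= 49:
        return "40-49"
    else:
        return "50 or older"

# Exhaustive: every age gets a label; mutually exclusive: only one.
assert age_category(17) == "less than 18"
assert age_category(29) == "18-29"
assert age_category(30) == "30-39"
assert age_category(65) == "50 or older"
```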
Term
Reliability vs. Validity |
Definition
Reliability: assures that the scale can consistently measure something. Validity: assures that the scale can measure what it is intended to measure. |
|
|
Term
Observed score components |
Definition
Observed score: true score (what a perfect measure, with no internal or external influences, would capture) plus measurement error ("slop").
Ex: a child getting 85% on a spelling test |
|
|
Term
Categories of Measurement Error |
|
Definition
Mistakes; stable attributes; situational factors; transient states; test characteristics |
|
|
Term
True score model |
Definition
X = T + e
X = observed score; T = true ability; e = error |
|
|
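The true score model X = T + e can be illustrated with a short simulation. This is a hedged sketch (the true score of 85 and the error spread are illustrative, not from the cards): each observed score is the true score plus random error, and averaging many observations makes the error wash out.

```python
import random

# Simulation of the classical model X = T + e: an observed score is the
# true score plus random measurement error. Values are illustrative.

random.seed(0)

T = 85.0                                      # true ability (e.g., spelling skill)
observations = [T + random.gauss(0, 3) for _ in range(1000)]

# Error averages out: the mean observed score approaches the true score.
mean_X = sum(observations) / len(observations)
assert abs(mean_X - T) < 1.0
```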
Term
Methods of assessing reliability |
|
Definition
Test-retest method; inter-rater method; split-half method. The method you use depends on the type of measure you are using. |
|
|
Term
Inter-rater method |
Definition
Compare measurements from different observers who are rating the same persons, events, or places. Reliability = number of agreements / total number of ratings. |
|
|
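The agreement ratio from the card above is simple to compute. This is a minimal sketch (the rating categories are illustrative): two observers rate the same events, and reliability is the share of events on which they agree.

```python
# Sketch of the agreement ratio from the card: number of agreements
# divided by total number of ratings. Ratings are illustrative.

def inter_rater_agreement(rater_a, rater_b):
    agreements = sum(a == b for a, b in zip(rater_a, rater_b))
    return agreements / len(rater_a)

# Two observers rate the same five events.
rater_a = ["violent", "nonviolent", "violent", "violent", "nonviolent"]
rater_b = ["violent", "nonviolent", "nonviolent", "violent", "nonviolent"]

assert inter_rater_agreement(rater_a, rater_b) == 0.8  # 4 of 5 agree
```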
Term
increasing inter-rater reliability |
|
Definition
Standardize procedures; train observers; use good operational definitions |
|
|
Term
Test-retest method |
Definition
When researchers measure a phenomenon that does not change between two points separated by an interval of time: how similar are the scores? |
|
|
Term
increasing test-retest reliability |
|
Definition
Use good operational definitions; clarify items; pre-test potential test items; ensure that your items are valid (important to have content validity) |
|
|
Term
Split-half method |
Definition
Make more than one measure of any concept; see if each measures the concept differently |
|
|
Term
increasing split-half reliability |
|
Definition
Use good operational definitions; clarify items; pre-test potential test items; present questions in a random order and test for split-half reliability (if you have the capability to do so); ensure that your items are valid (important to have content validity) |
|
|
Term
increasing inter-rater reliability |
|
Definition
Standardize procedures; clarify items; train observers; use good operational definitions |
|
|
Term
relationship between reliability and validity |
|
Definition
Reliability does not imply validity; validity implies reliability |
|
|
Term
Face validity |
Definition
Does it appear to measure what it is supposed to measure?
Sometimes we don't want our measures to have face validity. Why? Because sometimes people will answer questions differently, trying to conceal or express something. |
|
|
Term
Content validity |
Definition
Does the measure cover the range of meanings included in the concept? Measures the extent to which a test represents all possible items. |
|
|
Term
criterion-related validity |
|
Definition
Compares a measure to some external criterion. Concurrent: correlates with current outcomes. Predictive: correlates with future outcomes. |
|
|
Term
Construct validity |
Definition
Whether your variables are related to each other in the logically expected direction. Convergent validity: our measure should correlate with measures of similar constructs. Discriminant validity: our measure shouldn't correlate with measures of unrelated constructs. |
|
|
Term
basic units of crime measurement |
Definition
Offender; victim; offense: an individual act of burglary, auto theft, bank robbery, etc.; incident: one or more offenses committed by the same offender, or group of offenders acting in concert, at the same time and place |
|
|
Term
purposes of measuring crime |
Definition
Monitoring; agency accountability; research |
|
|
Term
theoretical relationship between crimes committed and official statistics |
|
Definition
Crimes undiscovered → crimes discovered → crimes reported → crimes recorded |
|
|
Term
uniform crime reports: offense types |
Definition
Type 1 offenses (index crimes): murder and non-negligent manslaughter, forcible rape, burglary, larceny-theft, robbery, aggravated assault, motor vehicle theft, arson. Type 2 offenses (non-index crimes): 22 other crimes. |
|
|
Term
uniform crime reports
positives |
|
Definition
Can compare agencies; quick, easy, efficient; index offenses are valid indicators of the public's crime concerns |
|
|
Term
uniform crime reports
negatives |
|
Definition
Doesn't count ALL crimes; completeness of data varies both across location and across time; definition of crime varies across time; creative work with crime statistics; hierarchy rule; crime rate is unweighted; victim characteristics are deemphasized; demographic shifts can skew data |
|
|
Term
national incident-based reporting system |
|
Definition
Incident-based instead of summary-based reports; expanded offense reporting; new offense definitions; elimination of the hierarchy rule; greater specificity of data; crimes against society; attempted versus completed crimes; designation of computer crime; quality control |
|
|
Term
national crime victimization survey
positives |
|
Definition
Model for spurring international imitation; opportunity to obtain a picture of victims and their characteristics; more accurate estimate of certain crimes like rape and assault; assesses issues such as fear of crime, satisfaction with police services, attitude toward the police, and reasons for not reporting crimes to the police |
|
|
Term
National crime victimization survey
negatives |
|
Definition
Cost of large samples; false reporting; mistaken interpretation of incidents; poor memory; telescoping; over-reporting and underreporting; sampling bias; human mistakes in coding and mechanical errors |
|
|
Term
NCVS redesign |
Definition
Expanded list of questions; computer-assisted telephone interviewing; technology altered the scope of crimes measured; revised screening questions; rephrasing of many questions; increased threshold for series victimizations |
|
|
Term
national survey on drug use and health |
|
Definition
Based on a nationwide sample of households (ages 12 and above); questions regarding use of illegal drugs, alcohol, and tobacco; distinguishes between lifetime, current, and heavy use; measurement error? (do people tell the truth; non-sampled population) |
|
|
Term
Monitoring the Future survey |
Definition
Differences from others: targets a specific population; asks a broader variety of questions; a subset receives a follow-up questionnaire (when college students) |
|
|
Term
drug surveillance systems |
|
Definition
Arrestee Drug Abuse Monitoring (ADAM): provides ongoing assessment of drug use among arrestees. Drug Abuse Warning Network (DAWN): collects emergency medical treatment reports for "drug episodes" from a sample of hospitals. |
|
|
Term
measuring crime for specific purposes |
|
Definition
local crime and self-report surveys incident-based crime reports observing crime |
|
|
Term
experimental research |
Definition
Establishing the existence of a cause-effect relationship: systematically manipulating one or more variables; exerting control; ruling out alternative explanations |
|
|
Term
three important factors in experimental research
o o
x
o o |
|
Definition
Independent and dependent variables; pretesting and posttesting; experimental and control groups. O stands for observation; X stands for manipulation/variable of interest. |
|
|
Term
experimental research
advantages/disadvantages |
|
Definition
Advantages: can establish causal relationships (if extraneous variables are controlled). Disadvantages: can't manipulate some variables; reduced generalizability. |
|
|
Term
experimental design types |
Definition
between-subjects designs - different participants are exposed to different conditions within-subjects designs - same participants are exposed to different conditions mixed designs at least one IV is manipulated between-subjects and at least one IV is manipulated within-subjects |
|
|
Term
XO
researcher wants to study effect of a reading program on reading achievement |
|
Definition
One-shot case study (X is the manipulated variable of interest; O stands for observation).
She implements the program at the beginning of the year and measures achievement at the end of the year. |
|
|
Term
do not know whether the students' reading skills actually changed from the start to the end of the school year
(improve by giving a pretest at the start of the study)
O X O |
|
Definition
Also known as: one-group pretest-posttest. Scores could be influenced by other instruction in school, the students' maturation, or the treatment. |
|
|
Term
our researcher may wish to have a comparison group
OXO
O O |
|
Definition
Also known as: static-group pretest-posttest. End-of-year reading scores could still be influenced by other instruction in school, the students' maturation, or the treatment. |
|
|
Term
XO
O |
Definition
Static-group comparison: if our researcher believes that the pretest has an impact on the results of the study, she might not include it. |
|
|
Term
random assignment to groups should spread the variety of extraneous characteristics that subjects possess equally across both groups
R XO
R O |
|
Definition
Randomized posttest-only, control group: because our researcher did not pretest, she might wish to randomly assign subjects to treatment and control groups. |
|
|
Term
R OXO
R O O |
Definition
Randomized pretest-posttest control group: of course, our researcher could include a pretest with her random assignment. |
|
|
Term
combining randomized designs |
Definition
Randomized Solomon four-group: occasionally researchers combine the randomized pretest-posttest control group design with the randomized posttest-only control group design. |
|
|
Term
one of the pretest groups is assigned to treatment and one of the non-pretest groups is assigned to treatment
R OXO
R O O
R XO
R O |
|
Definition
With the randomized Solomon four-group design, all groups are randomly assigned and given the posttest; two of the groups are also given pretests. |
|
|
Term
random assignment |
Definition
Experimental participants are distributed into various groups on a random (nonsystematic) basis.
All participants have an equal chance of being assigned to any of the treatment groups. Drawback? Cannot guarantee equality if you have only small numbers. |
|
|
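The random assignment described above can be sketched in a few lines: shuffle the subject pool, then split it in half. The function name and the seeded split are illustrative choices, not from the cards.

```python
import random

# Minimal sketch of random assignment: shuffle the subject pool, then
# split it into treatment and control groups.

def randomly_assign(subjects, seed=None):
    pool = list(subjects)
    random.Random(seed).shuffle(pool)   # nonsystematic ordering
    half = len(pool) // 2
    return pool[:half], pool[half:]     # treatment, control

subjects = list(range(20))
treatment, control = randomly_assign(subjects, seed=42)

# Every subject lands in exactly one group.
assert sorted(treatment + control) == subjects
assert len(treatment) == len(control) == 10
```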
Term
threats to internal validity |
|
Definition
Experimenter expectancy effects; subject reactivity; history; maturation; instrumentation; statistical regression; selection bias; experimental mortality; causal time order; diffusion/imitation of treatment; compensatory rivalry; demoralization |
|
|
Term
threats to internal validity
- experimenter expectancy effects |
|
Definition
Experimenter influences the DV through expectancies. Controls: double-blind testing; automation. |
|
|
Term
threats to internal validity
subject reactivity
controls |
|
Definition
Subject reactivity: subjects' behavior is a function of both the independent variable and something else. Demand characteristics: participants believe they know the hypothesis or how they are supposed to behave in the course of the study. Hawthorne effect: changes in behavior given knowledge that you are being observed. Controls: double-blind testing; concealment/deception; habituation; unobtrusive research designs. |
|
|
Term
threats to internal validity
history effects
maturation effects |
|
Definition
History effects: changes over time resulting from events that occur during the course of the experiment. Control: "control" group.
Maturation effects: changes over time resulting from the aging process. Control: reduce time between measurements. |
|
|
Term
threats to internal validity
instrumentation effects
statistical regression to the mean |
|
Definition
Instrumentation effects: changes in the measurement process. Control: reliability checks.
Statistical regression to the mean: extreme scores tend to regress toward the mean on subsequent testings. Control: control group / large sample size. |
|
|
Term
threats to internal validity
selection effect/group composition bias
experimental mortality |
|
Definition
Selection effect/group composition bias: the way in which subjects are chosen. Control: random assignment to conditions.
Experimental mortality: differential loss of subjects across conditions. Control: replace and report / pretest and compare dropouts to continuing participants on earlier measures. |
|
|
Term
threats to internal validity
causal time order
diffusion/imitation of treatment |
|
Definition
Causal time order: ambiguity about the order of the stimulus and the DV. Control: longitudinal design.
Diffusion/imitation of treatment: when experimental and control groups communicate, the experimental group may pass on elements of the treatment to the control group. Control: separated groups with a matched design. |
|
|
Term
threats to internal validity
compensatory treatment
compensatory rivalry |
|
Definition
Compensatory treatment: the control group is deprived of something believed to be of value. Controls: compensate the control group afterwards; double-blind testing.
Compensatory rivalry: the control group, deprived of the stimulus, may try to compensate by working harder. Control: keep experimental and control groups physically separated. |
|
|
Term
threats to internal validity
demoralization |
|
Definition
Feelings of deprivation result in the control group giving up. Control: keep experimental and control groups physically separated. |
|
|
Term
other types of validity:
generalization
construct validity |
|
Definition
Construct validity: how well the observed cause-and-effect relationship represents the underlying causal process in which a researcher is interested (remember convergent and discriminant validity?).
Related to experimental realism: the extent to which experimental procedures have an impact on participants; the extent to which events in the experiment are credible, involving, and taken seriously. |
|
|
Term
generalization
external validity |
|
Definition
Generalizability from a relationship observed in one setting to the same relationship in another setting; replication enhances external validity.
Related to mundane realism: the extent to which an experiment is similar to situations encountered in everyday life. |
|
|
Term
quasi-experimental designs |
|
Definition
When randomization isn't possible for legal, ethical, or practical reasons: a research design that includes most (but not all) aspects of an experimental design.
Inherent threats to internal validity. |
|
|
Term
nonequivalent groups designs |
|
Definition
If we cannot randomize, we cannot assume equivalency; match subjects in experimental and control groups using important variables likely related to the DV under study (comparison group) |
|
|
Term
nonequivalent group designs |
|
Definition
Aggregate matching; cohort designs (necessary to ensure that the two cohorts being examined against one another are actually comparable) |
|
|
Term
variable-oriented research
advantages/disadvantages |
|
Definition
Advantages: rich source of ideas for developing hypotheses; complement to nomothetic study of behavior.
Disadvantages: difficulty drawing cause-and-effect conclusions; external validity; experimenter biases. |
|
|