Term
|
Definition
Quasi-Experiments and Small-N Designs |
|
|
Term
|
Definition
Descriptive research; can be conducted either qualitatively or quantitatively. |
|
|
Term
Naturalistic Observation |
|
Definition
Designed to describe and measure the behavior of people or animals as it occurs in their everyday lives; in many cases, the only possible approach to collecting data
Ex. Hypothesis: Pedestrians who are talking on cell phones are more likely to walk into traffic, because people have a limited amount of cognitive resources; when on a cell phone, those resources are redirected away from 'common sense' behaviors. |
|
|
Term
Ecological Validity |
|
Definition
Extent to which the research is conducted in situations similar to the everyday life experiences of the participants; such research is said to "have ecological validity."
--Could you conduct the "cell phone" study in a research lab? |
|
|
Term
Observational Research |
|
Definition
Making observations of behavior; recording those observations in an OBJECTIVE manner; routinely used in: Psychology, Anthropology, Sociology, and other fields. |
|
|
Term
Participant VERSUS Observer |
|
Definition
The researcher may choose either to be a PARTICIPANT in the observational research by interacting with other participants or to remain an OBSERVER of the setting. |
|
|
Term
Acknowledged VERSUS Unacknowledged |
|
Definition
The researcher must also decide whether to ACKNOWLEDGE to the people being observed that the observation is occurring, or to remain UNACKNOWLEDGED. |
|
|
Term
Types of Observational Research Designs |
|
Definition
1. Acknowledged participant 2. Unacknowledged participant 3. Acknowledged observer 4. Unacknowledged observer |
|
|
Term
Acknowledged Participant |
|
Definition
Involved in the situation, WITH the participants' knowledge.
Problems: 1. The researcher is unable to hide his or her identity as a scientist, or it would be unethical to do so 2. May experience problems with reactivity |
|
|
Term
Unacknowledged participant |
|
Definition
Involved in the situation, WITHOUT the participants' knowledge.
Problems: 1. May get those being observed to reveal personal or intimate information about themselves and their social situation 2. May have difficulty remaining objective 3. May influence the processes being observed |
|
|
Term
Acknowledged Observer |
|
Definition
Only observing the situation, WITH the participants' knowledge.
Problems: Similar to those of the acknowledged/unacknowledged participant, plus the observer does not get as close to the participants. |
|
|
Term
Unacknowledged Observer |
|
Definition
Only observing the situation, WITHOUT the participants' knowledge.
Problems: Similar to those of the acknowledged/unacknowledged participant, plus the observer does not get as close to the participants. |
|
|
Term
Case Studies |
|
Definition
Descriptive records of one or more individuals' experiences and behaviors (data are based on only a small set of individuals, perhaps only one or two).
Often based on the experiences of only a very limited number of unusual individuals (e.g., the "Wild Child," a doomsday cult). |
|
|
Term
Systematic Coding Methods |
|
Definition
|
|
Term
Systematic Observation |
|
Definition
Involves specifying ahead of time exactly which observations are to be made on which people and at which times and places.
-Behavioral categories.... |
|
|
Term
Behavioral Categories |
|
Definition
Defined before the project begins; based on theoretical predictions.
Ex: The cell phone study - What behavior might we want to observe? |
|
|
Term
|
Definition
1. Event sampling 2. Individual sampling 3. Time sampling 4. Archival research |
|
|
Term
Event Sampling |
|
Definition
Focus on specific behaviors that are theoretically related to the research question (e.g., aggressive behavior, helping). |
|
|
Term
Individual Sampling |
|
Definition
Randomly selects one participant to be the focus of all the observers for an observational period. |
|
|
Term
Time Sampling |
|
Definition
Involves each observer focusing on a single participant for a time period before moving on to another participant. |
|
|
Term
Archival Research |
|
Definition
Based on analysis of any type of existing records of public behavior, such as: -Newspaper articles -Speeches and letters of public figures -Television and radio broadcasts -Internet Web sites -Existing surveys |
|
|
Term
Content analysis of archival research |
|
Definition
Essentially the same as systematic coding of observational data: includes the specification of coding categories; uses more than one rater.
Often the interpretation of events will vary for observers, even in the interpretation of recorded information. |
|
|
Term
Writing a Research Paper - APA Format |
|
Definition
-Title Page -Abstract -Introduction -Method -Participants -Materials -Procedure -Results -Discussion -References -Footnotes -Tables -Figures -Appendix |
|
|
Term
Quasi-experimental research designs |
|
Definition
Used to make comparisons among different groups of individuals who cannot be randomly assigned to the groups.
Independent variable or variables are measured, rather than manipulated.
Correlational, not experimental.
Some similarity to experimental research as the IV involves a grouping. |
|
|
Term
Selection Threats |
|
Definition
Occur because individuals select themselves into groups, rather than being randomly assigned to groups. |
|
|
Term
Retesting Threats |
|
Definition
Participants may be able to guess the research hypothesis.
They may respond differently to the second set of measures than they otherwise would have. |
|
|
Term
Attrition |
|
Definition
Occurs when participants drop out of the study and do not complete the second measure.
Participants who stay with the program may be different from those who drop out. |
|
|
Term
Maturation Threats |
|
Definition
Involve potential changes in the research participants over time unrelated to the IV. |
|
|
Term
History Threats |
|
Definition
Occur due to the potential influence of changes in the social climate during the course of a study. |
|
|
Term
Regression to the Mean |
|
Definition
When a variable is measured more than once, individuals tend to score more toward the average score of the group on the second measure than on the first measure, even if nothing has changed between the two measures. |
|
|
Term
Misinterpreting Results as a Result of Regression to the Mean |
|
Definition
Problematic because the farther a group is from the mean, the greater the regression to the mean will be.
Unreliable measures are more likely to produce regression to the mean. |
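A minimal simulation sketch (not from the lecture; the scores and the 85-point cutoff are invented) of why a group selected for extreme scores drifts back toward the mean on a retest whenever the measure contains random error:

import random

random.seed(1)
true_scores = [random.gauss(100, 10) for _ in range(10000)]   # stable "true" ability
test1 = [t + random.gauss(0, 10) for t in true_scores]        # first measurement = truth + noise
test2 = [t + random.gauss(0, 10) for t in true_scores]        # retest; nothing has actually changed

# Select the people who scored far below the mean on the first test...
low = [i for i, s in enumerate(test1) if s < 85]
mean1 = sum(test1[i] for i in low) / len(low)
mean2 = sum(test2[i] for i in low) / len(low)

# ...their average moves back toward 100 on the retest, purely from measurement error.
print(round(mean1, 1), round(mean2, 1))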
|
|
Term
Time-Series Designs |
|
Definition
Dependent measure is assessed. -For one or more groups more than twice. -At regular intervals. -Both before and after the experience of interest occurs. |
|
|
Term
Participant-variable design |
|
Definition
When the grouping variable involves preexisting characteristics of the participants. |
|
|
Term
Participant Variable |
|
Definition
The variable that differs across the participants. |
|
|
Term
Single-participant research designs |
|
Definition
Tracking the behavior of individuals over time makes it possible to draw conclusions about the changes in behavior of a single person.
A-B-A design or reversal design. |
|
|
Term
Program evaluation research |
|
Definition
Designed to study intervention programs to determine whether the programs are effective in helping the people who make use of them.
There are threats to its internal validity.
Uses a longitudinal design.
Difficulty controlling what occurs during that time. |
|
|
Term
|
Definition
External Validity (Replicability, Generalization and the “Real World”) |
|
|
Term
External Validity |
|
Definition
A second major set of potential threats to the validity of research (we've already talked about Internal Validity).
The extent to which the experiment allows conclusions to be drawn about what might occur outside of or beyond the existing research.
Any research, even if it has high internal validity, may be externally invalid. |
|
|
Term
Generalization |
|
Definition
Extent to which relationships among conceptual variables can be demonstrated in a wide variety of people and a wide variety of manipulated or measured variables. |
|
|
Term
Generalization Across Participants |
|
Definition
Goal of experimental research.... |
|
|
Term
Goal of experimental research |
|
Definition
Elucidate underlying causal relationships among conceptual variables.
Unless the researcher has reason to believe generalization will not hold, it is appropriate to assume a result found in one population will generalize to other populations. |
|
|
Term
Generalization Across Settings |
|
Definition
The uniqueness of an experiment makes it possible the findings are limited to the specific: -Settings -Experimenters -Manipulations -Measured variables |
|
|
Term
|
Definition
Repeat an experiment: in different places, with different experimenters, different operationalizations of the variables.
Increase the potential generalization by increasing ecological validity. |
|
|
Term
Field Experiments |
|
Definition
Experimental research designs conducted in a natural environment such as a library, a factory, or a school rather than in a research laboratory.
Generally have higher ecological validity than laboratory experiments.
-More External Validity, but...might have low Internal Validity. |
|
|
Term
Replication |
|
Definition
The process of repeating previous research. |
|
|
Term
Four 'General' Types of Replication |
|
Definition
1. Exact Replications 2. Conceptual Replications 3. Constructive Replications 4. Participant Replications |
|
|
Term
Exact Replications |
|
Definition
Goal of Exact Replication: -Repeat a previous research design as exactly as possible. -See if an effect found in one laboratory or by one researcher can be found in another lab by another researcher. |
|
|
Term
Conceptual Replications |
|
Definition
Investigates the relationship between the same conceptual variables studied in previous research.
Tests the hypothesis using different operational definitions of the IV and/or the measured DV. |
|
|
Term
Constructive Replications |
|
Definition
Tests the same hypothesis as in the original experiment.
Adds new conditions to the original experiment to assess the specific variables that might change the previously observed relationship.
Moderator Variables: a variable producing an interaction of the relationship between two other variables such that the relationship between them is different at different levels of the moderator variable (e.g., gender predicts salary; years of employment is the moderator variable; gender and years of employment together predict salary). |
|
|
Term
Moderator Variables |
|
Definition
A variable producing an interaction of the relationship between two other variables such that the relationship between them is different at different levels of the moderator variable:
--Gender predicts Salary --Years of Employment is the Moderator Variable --Gender and Years of Employment together predict Salary |
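An illustrative Python sketch (the data are invented, not from the lecture): the slope relating years of employment to salary differs between the two groups, which is what it means for the grouping variable to moderate the relationship.

def slope(x, y):
    # ordinary least-squares slope of y on x
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

years = [1, 3, 5, 7, 9]
salary_group_a = [40, 48, 57, 66, 74]   # salary rises steeply with years in group A
salary_group_b = [38, 41, 43, 46, 48]   # salary rises only slightly with years in group B

print("slope in group A:", round(slope(years, salary_group_a), 1))
print("slope in group B:", round(slope(years, salary_group_b), 1))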
|
|
Term
Participant Replications |
|
Definition
Conducted using new types of participants.
Should be designed as a constructive replication in which both the original population and a new one are used. |
|
|
Term
Summarizing and Integrating Research Results |
|
Definition
Some things to keep in mind... -Every test of a research hypothesis is limited in some sense. -Some experiments are conducted in specific settings that seem unlikely to generalize. -Others are undermined by potential alternative explanations. -Every significant result may be invalid because it represents a Type I error. |
|
|
Term
Research Programs |
|
Definition
Collections of experiments conducted in such a way that they systematically study a topic of interest through conceptual and constructive replications over a period of time: Exact Replications, Conceptual Replications, Constructive Replications, Participant Replications. |
|
|
Term
Meta-Analysis |
|
Definition
Statistical technique using the results of existing studies to integrate and draw conclusions about those studies.
Provides a relatively objective method of reviewing research findings.
Specifies inclusion criteria indicating exactly which studies will or will not be included in the analysis.
Systematically searches for all studies meeting the inclusion criteria.
Uses the effect size statistic to provide an objective measure of the strength of observed relationships. |
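A rough Python sketch of the effect-size step (the study numbers are hypothetical, Cohen's d is just one common effect-size statistic, and the sample-size-weighted average below is a simplification of a formal inverse-variance meta-analysis):

# Each hypothetical study: (treatment mean, control mean, pooled SD, total N)
studies = [(105, 100, 10, 40), (108, 101, 12, 60), (103, 100, 9, 25)]

def cohens_d(mean_treatment, mean_control, pooled_sd):
    return (mean_treatment - mean_control) / pooled_sd

total_n = sum(n for _, _, _, n in studies)
weighted_d = sum(cohens_d(mt, mc, sd) * n for mt, mc, sd, n in studies) / total_n
print("sample-size-weighted average d =", round(weighted_d, 2))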
|
|
Term
Basic Research |
|
Definition
Answers fundamental questions about behavior; questions considered at the theoretical level (not intended to address a specific, practical problem).
Ex: Studies designed to better understand the visual system, the capacity of human memory, the motivations of a depressed person, or the limitations of the infant attachment system. |
|
|
Term
Applied Research |
|
Definition
Investigates issues that have implications for everyday life and provides solutions to everyday problems; questions considered at the 'real world' level.
Ex: Exploring new treatments for mental disorders (e.g., depression, autism, or eating disorders). Applied psychologists might also experiment with new methods for teaching math, evaluate teachers, or look for better ways to identify those most at risk for depression, failing in school, or cheating. Even predicting who is likely to do well in college or at a particular job. |
|
|
Term
Primary Sources |
|
Definition
Contain complete descriptions of the collected data and data analyses. |
|
|
Term
Secondary Sources |
|
Definition
Only contain summaries or interpretations. |
|
|
Term
Four Types of Validity |
|
Definition
1. Construct validity 2. Statistical validity 3. Internal validity 4. External validity |
|
|
Term
Construct Validity |
|
Definition
Extent to which a measured variable actually measures the conceptual variable it is designed to assess.
"How well the variables in the study are measured or manipulated. Are the operational variables used in the study a good approximation of the constructs of interest"? |
|
|
Term
Statistical Validity |
|
Definition
Extent to which those statistical conclusions are accurate and reasonable.
"How well a study minimizes the probabilities of two errors: concluding that there is an effect when in fact there is none (a "false alarm," or Type I error) or concluding that there is no effect when in fact there is one (a "miss," or Type II error); also addresses the strength of an association and its statistical significance (the probability that the results could have been obtained by chance if there really is no relationship)". |
|
|
Term
Internal Validity |
|
Definition
"In a relationship between one variable (A) and another (B), the degree to which we can say that A, rather than some other variable (such as C), is responsible for the effect on B". |
|
|
Term
External Validity |
|
Definition
"The degree to which the results of the study generalize to some larger population (do the results from this sample of children apply to all U.S. school children?), as well as to to other situations (do the results based on this type of music apply to other types of music?)". |
|
|
Term
Reliability vs. Validity in measurement |
|
Definition
The validity of a measure is not the same as its reliability. Reliability has to do with how well a measure correlates with itself (e.g., an IQ test is reliable if it is correlated with itself over time), but validity has to do with how well a measure is associated with some other similar, but not identical, measure (e.g., an IQ test is valid if it is associated with another variable, such as school grades or life success).
If a test does not even correlate with itself (is NOT reliable), then how can it be more strongly associated with a measure of some other variable (validity)? A measure can be LESS valid than it is reliable, but it cannot be MORE valid than it is reliable. |
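A simulated illustration in Python (all data are made up, not from the text): the same noisy IQ-like measure correlates more strongly with itself across two administrations (reliability) than with a related but different criterion such as grades (validity).

import random

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(0)
ability = [random.gauss(0, 1) for _ in range(5000)]
iq_time1 = [a + random.gauss(0, 0.5) for a in ability]    # test, first administration
iq_time2 = [a + random.gauss(0, 0.5) for a in ability]    # same test, later administration
grades = [0.6 * a + random.gauss(0, 1) for a in ability]  # related but distinct criterion

print("reliability (test-retest r):", round(pearson_r(iq_time1, iq_time2), 2))
print("validity (r with grades):", round(pearson_r(iq_time1, grades), 2))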
|
|
Term
3 Types of Research Methods |
|
Definition
1. Descriptive 2. Correlational 3. Experimental |
|
|
Term
Descriptive Research |
|
Definition
Designed to answer questions about the current state of affairs (through surveys, interviews, naturalistic observation, etc).
CLAIM: Frequency
STATS: mean, median, mode, range
Mean: the average
Median: the value at the middlemost score of a distribution of scores - the score that divides a frequency distribution into halves.
Mode: the value of the most common score - the score that was received by more members of the group than any other.
Range: the area of variation between upper and lower limits on a particular scale - the difference between the largest and smallest values. |
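A quick Python check of the four statistics above on a made-up set of scores (the statistics module is in the standard library):

from statistics import mean, median, mode

scores = [3, 7, 7, 2, 9, 5, 7, 4]
print("mean:", mean(scores))                   # the average
print("median:", median(scores))               # middlemost score of the distribution
print("mode:", mode(scores))                   # most common score
print("range:", max(scores) - min(scores))     # largest value minus smallest value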
|
|
Term
Correlational Research |
|
Definition
Involves the measurement of two or more relevant variables and an assessment of the relationship between or among those variables.
CLAIM: Association
STATS: Pearson correlation coefficient r, multiple correlation R
(One type of correlational research involves predicting future events from currently available knowledge). |
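A minimal Python example with invented data (statistics.correlation requires Python 3.10 or newer; a negative r would indicate an inverse association):

from statistics import correlation

hours_studied = [1, 2, 3, 4, 5, 6, 7, 8]
exam_score = [52, 55, 61, 60, 68, 70, 75, 74]
print("Pearson r =", round(correlation(hours_studied, exam_score), 2))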
|
|
Term
Experimental Research |
|
Definition
Involves the active creation or manipulation of a given situation or experience for two or more groups of individuals (IV), followed by a measurement of the effect of those experiences on thoughts, feelings, or behaviors (DV).
CLAIM: Causality
STATS: ANOVA (has F-test, Degrees of freedom, and p-value), F-ratio, Main effects, Interaction |
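A hand-computed sketch of the F-ratio behind a one-way ANOVA, using invented scores for two groups (a real factorial ANOVA adds more main effects and the interaction, and the p-value would come from the F distribution):

control = [4, 5, 6, 5, 4]
treatment = [7, 8, 6, 9, 7]
groups = [control, treatment]

grand_mean = sum(sum(g) for g in groups) / sum(len(g) for g in groups)
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

df_between = len(groups) - 1                           # degrees of freedom between groups
df_within = sum(len(g) for g in groups) - len(groups)  # degrees of freedom within groups
F = (ss_between / df_between) / (ss_within / df_within)
print(f"F({df_between}, {df_within}) = {F:.2f}")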
|
|
Term
Sampling from a population |
|
Definition
Unbiased versus Biased Sampling Types |
|
|
Term
Unbiased Sampling Types |
|
Definition
Simple random sample, systematic sampling, cluster sample/multistage sample, oversampling, stratified random sample. |
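A short Python sketch (invented population of 1,000 students) of two of the strategies above, simple random sampling and stratified random sampling:

import random

random.seed(42)
population = [{"id": i, "year": random.choice(["freshman", "senior"])} for i in range(1000)]

# Simple random sample: every member has an equal chance of being chosen.
simple_random = random.sample(population, 50)

# Stratified random sample: draw separately within each stratum (here, class year).
strata = {}
for person in population:
    strata.setdefault(person["year"], []).append(person)
stratified = [p for group in strata.values() for p in random.sample(group, 25)]

print(len(simple_random), len(stratified))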
|
|
Term
Biased Sampling Types |
|
Definition
Self-selected, purposive (e.g. snowball sampling), easy to find (convenience samples) |
|
|
Term
|
Definition
Null versus Your Hypothesis |
|
|
Term
Null Hypothesis |
|
Definition
Assuming there is NO effect. |
|
|
Term
Your Hypothesis (the Research Hypothesis) |
|
Definition
Assuming there IS an effect. |
|
|
Term
Type I Error |
|
Definition
A "false positive"; reject the null hypothesis (conclude that there is an effect) when there really is no effect in the population. |
|
|
Term
Type II Error |
|
Definition
A "miss"; retain the null hypothesis (conclude there is not enough evidence of an effect) when there really is an effect in the population. |
|
|
Term
Factorial Designs |
|
Definition
Most experimental research designs include more than one independent variable.
Factor: refers to each of the manipulated independent variables.
Level: refers to each condition within a particular independent variable.
Ex: 2x3, 2x2, 2x2x4
Cells: the conditions in factorial designs. |
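A small sketch with invented data for a hypothetical 2x2 design (factor A: drug vs. placebo; factor B: therapy vs. none), showing the four cells, the two main effects, and the interaction:

# Scores for the four cells of the 2x2 design (each list = one condition's participants).
cells = {
    ("drug", "therapy"): [8, 9, 7, 8],
    ("drug", "none"): [5, 6, 5, 6],
    ("placebo", "therapy"): [6, 5, 6, 5],
    ("placebo", "none"): [4, 5, 4, 5],
}
m = {k: sum(v) / len(v) for k, v in cells.items()}  # cell means

# Main effect of each factor = difference between its marginal means.
drug_main = (m[("drug", "therapy")] + m[("drug", "none")]) / 2 - (
    m[("placebo", "therapy")] + m[("placebo", "none")]) / 2
therapy_main = (m[("drug", "therapy")] + m[("placebo", "therapy")]) / 2 - (
    m[("drug", "none")] + m[("placebo", "none")]) / 2

# Interaction: does the effect of therapy differ across the drug levels?
interaction = (m[("drug", "therapy")] - m[("drug", "none")]) - (
    m[("placebo", "therapy")] - m[("placebo", "none")])

print("drug main effect:", drug_main)
print("therapy main effect:", therapy_main)
print("interaction:", interaction)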
|
|