Term
Who is responsible for the ethics of research? |
|
Definition
Ethical responsibility applies to all psychologists, including students. |
|
|
Term
What are the 5 principles for ethical conduct?
|
|
Definition
1) Beneficence and Nonmaleficence
2) Fidelity and Responsibility
3) Integrity
4) Justice
5) Respect for People's Rights and Dignity |
|
|
Term
Beneficence and Nonmaleficence: |
|
Definition
Psychologists strive to benefit those with whom they work and take care to do no harm. |
|
|
Term
Fidelity and Responsibility: |
|
Definition
Psychologists establish relationships of trust with those with whom they work. They are aware of their professional and scientific responsibilities to society and to the specific communities in which they work. |
|
|
Term
Integrity: |
Definition
Psychologists seek to promote accuracy, honesty, and truthfulness in the science, teaching, and practice of psychology. |
|
|
Term
Justice: |
Definition
Psychologists recognize that fairness and justice entitle all persons to access to and benefit from the contributions of psychology and to equal quality in the processes, procedures, and services being conducted by psychologists. |
|
|
Term
Respect for People's Rights and Dignity: |
|
Definition
Psychologists respect the dignity and worth of all people, and the rights of individuals to privacy, confidentiality, and self-determination. |
|
|
Term
What is the name of the entity to which you submit your research proposal? |
|
Definition
Institutional review board. |
|
|
Term
Institutional Review Board: |
|
Definition
It protects research participants, researchers, society, and the university. |
|
|
Term
When do you engage the IRB in research? |
|
Definition
Before the research begins, once a proposal has been prepared. |
|
|
Term
Institutional Animal Care and Use Committee (IACUC): |
|
Definition
-protects rights and welfare of animal subjects
-committee members
-care and housing of animals |
|
|
Term
The costs and benefits of a research project affect whom? |
|
Definition
-participants
-society
-the researcher and institution |
|
|
Term
Risk/Benefit Analysis: |
Definition
-Is the research worth it?
-Benefits greater than risks?
-Study produce valid and interpretable results? |
|
|
Term
What are the 3 types of potential injury (risk)? |
Definition
1)Physical injury
2)Psychological injury (mental or emotional stress)
3)Social injury (Embarrassment) |
|
|
Term
|
Definition
Protect participants from all risk. |
|
|
Term
Minimal Risk: |
Definition
-harm or discomfort is not greater than that experienced in daily life or during routine physical or psychological tests
-minimal risk differs across participants |
|
|
Term
Risk Greater than Minimal: |
Definition
-risk is greater than minimal
-increases researchers' ethical obligation to protect participants' welfare
-find methods with lower risk |
|
|
Term
Confidentiality: |
Definition
-social risk
-does not equal "anonymous"
-internet research = confidentiality is a special problem |
|
|
Term
How to increase confidentiality? |
|
Definition
-remove identifying information
-report results in terms of statistical averages |
|
|
Term
What does informed consent make clear to participants? |
|
Definition
-nature of research (what they will do)
-possible risks |
|
|
Term
When is written informed consent required? |
|
Definition
-required when risk is greater than minimal
-not required when researchers observe public behavior |
|
|
Term
What does informed consent require? |
|
Definition
-to inform participants of all aspects of research that may influence their decision to participate
-allow participants to withdraw at any time without penalty
-no pressure |
|
|
Term
Privacy: |
Definition
The right of individuals to decide what information about them is communicated to others. |
|
|
Term
What are 3 dimensions of privacy? |
|
Definition
1) Sensitivity of the information: more sensitive -> more private (sexual practices)
2) Setting: public settings -> less private (concerts)
3) Method of Dissemination of the Information: Sensitive information -> more protections (group averages) |
|
|
Term
Deception: |
Definition
-Occurs when information is withheld from participants.
-Occurs when participants are intentionally misinformed about aspects of the research.
-Deception for the purpose of getting people to participate is always unethical |
|
|
Term
Advantages of deception: |
Definition
-allows study of people's natural behavior
-opportunity to investigate behavior and mental processes not easily studied without deception |
|
|
Term
Disadvantages of deception: |
Definition
-contradicts principle of informed consent
-relationship between research and participant is not open and honest
-frequent deception makes people suspicious about research and psychology |
|
|
Term
When is deception justified? |
|
Definition
-When the study is very important
-When no other methods are available
-When deception would not influence decision to participate |
|
|
Term
When deception is used, what must the researcher do? |
|
Definition
-inform participants of the reason for deception
-discuss any misconceptions
-remove any harmful effects |
|
|
Term
At what point during the experimental process do we address deception with the participant? |
|
Definition
During debriefing, at the end of the study. |
|
Term
What is the goal of debriefing? |
|
Definition
participants should feel good about the research experience |
|
|
Term
APA Ethical Standards and IACUCs? |
|
Definition
-researchers are ethically obligated to protect welfare of animal subjects
-justify any pain, discomfort, or death by potential scientific, educational, or applied goals. |
|
|
Term
Reporting Psychological Research: |
|
Definition
>Publication Credit
-acknowledge fairly those who contributed to a research project
-authorship based on scholarly importance of contributions |
|
|
Term
Plagiarism: |
Definition
-don't present substantial portions or elements of another's work as your own
-"substantial portion or element" can be 1-2 words if it represents a key idea
-ignorance or sloppiness are not legitimate excuses
-cite sources appropriately. |
|
|
Term
When quoting someone, what components of the citation do you need to have? |
|
Definition
Last name
Published Date
Page #
Quotation Marks // Block Quotation |
|
|
Term
Steps for Ethical Decision Making: |
|
Definition
-Find out facts (procedure, participants, etc)
-Identify the relevant ethical issue (risk, informed consent, privacy, confidentiality, deception, debriefing)
-Decide what is at stake for all parties (participants, researchers, institutions, society) |
|
|
Term
Why do psychologists conduct research? |
|
Definition
To test:
-Hypotheses derived from theories
-Effectiveness of treatments and programs |
|
|
Term
Third goal of psychological research: |
|
Definition
Explanation - examine the causes of behavior |
|
|
Term
What must an experiment include? |
|
Definition
-Independent Variable (IV)
-Dependent Variable (DV) |
|
|
Term
Independent Variable (IV): |
Definition
-Manipulated (controlled) by experimenter
-At least two conditions (levels)
1)treatment
2)control |
|
|
Term
Dependent Variable (DV): |
Definition
-Measured by Experimenter
-Used to determine effect of IV (in most experiments, researchers measure several dependent variables to assess effect of IV) |
|
|
Term
When does an experiment have internal validity? |
|
Definition
-When we can state confidently that the independent variable caused differences between groups on the dependent variable (causal inference)
-When alternative explanations for a study's findings are ruled out |
|
|
Term
What are the 3 conditions for causal inference? |
|
Definition
1) Covariation
2) Time-Order Relationship
3) Elimination of plausible alternative causes |
|
|
Term
Covariation: |
Definition
-Relationship between IV and DV (A & B)
|
|
|
Term
Correlation does not imply ... |
|
Definition
Causation. |
|
Term
Time-Order Relationship: |
Definition
-The presumed cause precedes the effect
EX: Version of images (cause) was manipulated prior to measuring body dissatisfaction (effect)
|
|
|
Term
Elimination of Plausible Alternative Causes: |
|
Definition
Use control techniques to eliminate other explanations.
EX: If the 3 groups differ in ways other than the type of images they viewed, these differences are alternative explanations for the study's findings.
|
|
|
Term
Confounding: |
Definition
-when the IV is allowed to covary with a different, potential independent variable
-an experiment that is free of confoundings has internal validity |
|
|
Term
What do confoundings represent? |
|
Definition
alternative explanations for a study's findings |
|
|
Term
What are 2 control techniques to eliminate alternative explanations? |
|
Definition
-holding conditions constant
-balancing
*proper use of control techniques = internal validity |
|
|
Term
Holding conditions constant: |
|
Definition
-Independent variable: groups in the different conditions have different experiences
-Experiences should differ only in terms of the independent variable.
|
|
|
Term
What is the only thing we allow to vary across groups? |
|
Definition
The IV conditions - everything else should be the same for the groups of the experiment. |
|
|
Term
|
Definition
Some alternative explanations for a study's findings concern characteristics of participants. |
|
|
Term
Why can't some variables be held constant? |
|
Definition
because of different characteristics among participants. |
|
|
Term
What is the goal of balancing (controlling for alternative explanations due to subject characteristics)? |
|
Definition
to make sure that, on average, participants (as a group) in each condition are essentially equivalent |
|
|
Term
How do you balance subject characteristics across the levels of the experiments? |
|
Definition
-participants are assigned to conditions using some random procedure
-random assignment creates, on average, equivalent groups of participants in the experimental conditions
-rule out alternative explanations due to subject characteristics |
|
|
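As a rough illustration (not part of the deck), random assignment can be sketched in Python; the "anxiety" score and sample sizes are made up:

```python
import random
import statistics

random.seed(42)

# Hypothetical pool: each participant has an anxiety score, a subject
# characteristic we want balanced across conditions.
pool = [random.gauss(50, 10) for _ in range(200)]

# Random assignment: shuffle the pool, then split into two equal groups.
random.shuffle(pool)
treatment, control = pool[:100], pool[100:]

mean_t = statistics.mean(treatment)
mean_c = statistics.mean(control)

# With reasonably large samples, the group means should be close:
# the subject characteristic is balanced "on average".
print(round(mean_t, 1), round(mean_c, 1))
```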
Term
Independent Groups Design: |
|
Definition
-different individuals participate in each condition of the experiment (ex: no overlap of participants across conditions)
|
|
|
Term
What are the 3 types of independent groups design? |
|
Definition
1) Random Groups Design
2) Matched Groups Design
3) Natural Groups Design |
|
|
Term
Random Groups Design: |
Definition
>Individuals are randomly assigned to conditions of the Independent Variable
-Groups of participants are equivalent, on average, before the IV manipulation
-Any differences between groups on dependent variable are caused by independent variable (if conditions are held constant)
|
|
|
Term
Block Randomization: |
Definition
A "block" is a random order of all conditions in the experiment
EX: A random order of conditions A, B, C could be B C A |
|
|
Term
Advantages of block randomization: |
|
Definition
-creates groups of equal size for each condition
-controls for time-related events that occur during course of experiment
-balances subject characteristics across conditions of the experiment |
|
|
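A minimal Python sketch of block randomization for assigning participants to conditions (condition labels and block count are made up):

```python
import random

random.seed(1)

conditions = ["A", "B", "C"]
n_blocks = 4  # 4 blocks x 3 conditions = 12 participants

# Each block is a fresh random order of all conditions; successive
# participants are assigned to conditions in this sequence.
assignment = []
for _ in range(n_blocks):
    assignment.extend(random.sample(conditions, k=len(conditions)))

print(assignment)
```

Equal group sizes fall out automatically, since each condition appears exactly once per block.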
Term
(Threats to Internal Validity)
When is the ability to make causal inferences jeopardized? |
|
Definition
-intact groups are used
-extraneous variables are not controlled
-selective subject loss occurs
-demand characteristics and experimenter effects are not controlled |
|
|
Term
(Threats to Internal Validity)
Intact Groups: |
Definition
-groups exist before experiment
-individuals are not randomly assigned to intact groups
-when intact groups (not individuals) are randomly assigned to conditions, subject characteristics are not balanced
-do not use intact groups |
|
|
Term
(Threats to internal Validity)
Extraneous Variables: |
|
Definition
-Practical considerations when conducting an experiment may create confoundings |
|
|
Term
What are some examples of extraneous variables? |
|
Definition
-number of participants in each session
-different experimenters
-different rooms where experiment is conducted |
|
|
Term
How to control extraneous variables? |
|
Definition
1) Balancing
2) Holding conditions constant |
|
|
Term
Balancing: |
Definition
Randomly assign extraneous variables across the conditions of the experiment |
|
|
Term
Holding Conditions Constant: |
|
Definition
-hold extraneous variables constant across the conditions of the experiment
EX: One experimenter conducts both treatment and control sessions |
|
|
Term
(Threats to Internal Validity)
Subject Loss (Attrition): |
|
Definition
-When participants fail to complete an experiment
-Equivalent groups formed at beginning of an experiment through random assignment may no longer be equivalent |
|
|
Term
What are the 2 types of subject loss? |
Definition
1)Mechanical Subject loss
2)Selective Subject loss |
|
|
Term
Mechanical Subject Loss: |
Definition
-equipment failure or experimenter error results in inability to complete the experiment
-often due to chance factors
-likely to occur equally across conditions of experiment
-does not threaten internal validity of experiment |
|
|
Term
Selective Subject Loss: |
Definition
-occurs when participants are lost differentially across conditions
-some characteristic of participant is responsible for the loss
-the subject characteristic is related to the dependent variable |
|
|
Term
Demand Characteristics: |
Definition
cues participants use to guide their behavior in a study |
|
|
Term
Placebo Control Group: |
Definition
-used to assess whether participants' expectancies contribute to outcome of experiment
-participants receive a placebo (inert substance) but believe they may be receiving an effective treatment
-if participants who receive the actual drug improve more than participants who receive the placebo, we gain confidence that the drug produced the beneficial outcome, rather than expectancies |
|
|
Term
Experimenter Effects: |
Definition
Potential biases that occur when experimenters' expectancies regarding the outcome of the experiment influence their behavior toward participants
-control by keeping experimenters and observers "blind" or unaware of the expected results |
|
|
Term
Double-Blind Procedures: |
Definition
-procedures in which both participants and experimenters/observers are unaware of the condition being administered
-allows researchers to rule out participants' and experimenters' expectancies as alternative explanations for a study's outcome |
|
|
Term
Double-blind experiment controls what 2 things? |
|
Definition
-demand characteristics
-experimenter effects |
|
|
Term
We rely on statistical analysis to ... |
|
Definition
-claim an independent variable produced an effect on a dependent variable
-rule out the alternative explanation that chance produced differences among the groups in an experiment. |
|
|
Term
Replication: |
Definition
-best way to determine whether findings are reliable
-repeat experiment and see if same results are obtained |
|
|
Term
(Analysis of Experimental Designs)
What are the three steps? |
|
Definition
1)Check the Data
- errors? outliers?
2)Describe the results
-descriptive statistics such as means, standard deviations
3) Confirm what the data reveal
-inferential statistics |
|
|
Term
(Descriptive Statistics)
Mean (Central Tendency) : |
|
Definition
-average score on a DV, computed for each group
-not interested in each individual score |
|
|
Term
(Descriptive Statistics)
Standard Deviations (Variability): |
|
Definition
-average distance of each score from the mean of a group
-not everyone responds the same way to an experimental condition |
|
|
Term
(Descriptive Statistics)
Effect Size: |
|
Definition
Measure of the strength of the relationship between the IV and DV |
|
|
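One common effect-size measure is Cohen's d, the standardized mean difference between two conditions. A small Python sketch with made-up scores (the deck does not specify a particular measure):

```python
import statistics

def cohens_d(group1, group2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    n1, n2 = len(group1), len(group2)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

treatment = [5, 6, 7, 8, 9]   # hypothetical DV scores
control = [3, 4, 5, 6, 7]
print(round(cohens_d(treatment, control), 2))  # → 1.26
```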
Term
(Descriptive Statistics)
Meta-analysis: |
|
Definition
-summarize the effect sizes across many experiments that investigate the same IV or DV
-select experiments to include based on their internal validity and other criteria
-allows researchers to gain confidence in general psychological principles |
|
|
Term
(Analysis of Experiments)
CONFIRM WHAT THE DATA REVEAL
You use inferential statistics to determine what? |
|
Definition
That the IV had a reliable effect on the DV |
|
|
Term
What are the two types of inferential statistics? |
|
Definition
1. Null Hypothesis Significance Testing
2. Confidence intervals |
|
|
Term
Null Hypothesis Significance Testing |
|
Definition
-Statistical procedure to determine whether mean difference between conditions is greater than what might be expected due to chance or error variation.
- The effect of an IV on the DV is statistically significant when the probability of the results being due to chance is low. |
|
|
Term
What are the steps for Null Hypothesis Significance Testing? |
|
Definition
1. Assume the Null Hypothesis is true.
2. Use the sample means to estimate population means
3. Compute the appropriate inferential statistic
4. Identify the probability associated with the inferential statistic
5. Compare the observed probability with the predetermined level of significance (alpha), which is usually p < .05 |
|
|
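The logic of "probability of the results being due to chance" can be illustrated with a permutation test (a nonparametric cousin of the t-test; the scores below are made up):

```python
import random
import statistics

random.seed(0)

treatment = [14, 15, 16, 17, 18, 19]  # hypothetical DV scores
control = [12, 13, 13, 14, 15, 16]
observed = statistics.mean(treatment) - statistics.mean(control)

# Under the null hypothesis, condition labels are arbitrary: shuffle them
# many times and count how often a mean difference at least as large as
# the observed one arises by chance alone.
pooled = treatment + control
n = len(treatment)
count = 0
n_perms = 10000
for _ in range(n_perms):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / n_perms
print(p_value)  # compare with alpha = .05
```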
Term
Confidence Intervals: |
Definition
- sample means estimate population means
- confidence intervals provide the range of values that contains the true population mean
- performance in one experimental condition differs from performance in a second condition.
- Compute the confidence interval around the sample mean in each condition. If confidence intervals do not overlap, we gain confidence that the population means for the conditions are different. That is, there is a difference among conditions. |
|
|
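A minimal sketch of the non-overlap check in Python, using a normal approximation for the 95% interval (a t distribution is more precise for small samples; the scores are made up):

```python
import statistics
from statistics import NormalDist

def ci95(scores):
    """Approximate 95% confidence interval around a sample mean."""
    m = statistics.mean(scores)
    sem = statistics.stdev(scores) / len(scores) ** 0.5  # standard error
    z = NormalDist().inv_cdf(0.975)  # ~1.96
    return (m - z * sem, m + z * sem)

condition_a = [20, 21, 22, 23, 24, 25, 26, 27, 28, 29]
condition_b = [30, 31, 32, 33, 34, 35, 36, 37, 38, 39]

lo_a, hi_a = ci95(condition_a)
lo_b, hi_b = ci95(condition_b)

# Non-overlapping intervals suggest the population means differ.
print(hi_a < lo_b)  # → True
```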
Term
What is External Validity ? |
|
Definition
The extent to which findings from an experiment can be generalized to describe individuals, settings, and conditions beyond the scope of a specific experiment.
* any single experiment has limited external validity
* external validity of findings increases when findings are replicated in a new experiment. |
|
|
Term
How do you increase external validity? |
|
Definition
-Include characteristics of situations, settings, and populations to which researchers wish to generalize
-Partial replications
-Field experiments
-Conceptual replications |
|
|
Term
Matched Groups Design: |
Definition
* Random assignment requires large samples to balance subject characteristics
* Sometimes only small samples are available
* In a matched groups design, researchers select one or two individual differences variables for matching |
|
|
Term
What is the procedure for the Matched Groups Design? |
|
Definition
1. Select matching variable
2. Measure variable and order individuals' scores
3. Match pairs (or triples, quadruplets, etc., depending on number of conditions) of identical or similar scores
4. Randomly assign participants within each match to the different conditions. |
|
|
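The four steps above can be sketched in Python; the pool size and IQ-like matching scores are hypothetical:

```python
import random

random.seed(7)

# Step 1-2: a pool of participants measured on the matching variable,
# then ordered by score.
pool = [{"id": i, "score": random.randint(90, 130)} for i in range(12)]
pool.sort(key=lambda p: p["score"])

# Step 3: adjacent scores form matched pairs.
# Step 4: within each pair, random assignment to the two conditions.
treatment, control = [], []
for i in range(0, len(pool), 2):
    pair = pool[i:i + 2]
    random.shuffle(pair)
    treatment.append(pair[0])
    control.append(pair[1])

print(len(treatment), len(control))
```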
Term
Important points about matching |
|
Definition
1. participants are matched only on the matching variable
2. participants across conditions may differ on other important variables
3. these differences may be alternative explanations for study's results (confounding)
4. the more characteristics a researcher tries to match, the harder it will be to match. |
|
|
Term
What is the Natural Groups Design? |
|
Definition
Psychologists' questions often ask how individuals differ, and how these individual differences are related to important outcomes.
-Researchers can't randomly assign participants to these groups (random assignment to male/female groups?)
Examples
1. Do men and women differ in what they seek in intimate relationships?
2. Are extraverted individuals, compared to introverted individuals, more likely to succeed in business? |
|
|
Term
What are Individual Differences (Subject) Variables? |
|
Definition
Characteristics or traits that vary across individuals
Examples
Physical: sex, race
Social (demographic): ethnicity, religious affiliation
Personality: extraversion, intelligence
Mental health: depression, anxiety, drug abuse |
|
|
Term
When a researcher investigates an independent variable in which the groups (conditions) are formed "naturally", we say what is used? |
|
Definition
A natural groups design is used. |
|
Term
Natural groups designs allow researchers to do what? |
|
Definition
to predict relationships among
1. Individual differences variables
2. Outcomes |
|
|
Term
Repeated Measures design... |
|
Definition
1. each individual participates in each condition of the experiment
- completes the DV with each condition
- hence "repeated measures"
2. Also called "within-subjects" design
- entire experiment is conducted "within" each subject. |
|
|
Term
Why should we use a repeated measures design? |
|
Definition
-Because there is no need to balance individual differences across conditions of the experiment, since all participants are in each condition
- fewer participants are needed
- convenient and efficient
- more "sensitive"
|
|
|
Term
A sensitive experiment... |
|
Definition
- can detect the effect of an independent variable even if the effect is small
- repeated measures designs are more sensitive than independent groups designs
("error variation" is reduced) |
|
|
Term
Main disadvantage of repeated measures designs is what? |
|
Definition
PRACTICE EFFECTS
- people change as they are tested repeatedly
|
|
|
Term
Practice effects become a potential confounding variable if what? |
|
Definition
When practice effects are not balanced (averaged) across the conditions of the experiment. |
|
Term
Practice effects must be balanced, or averaged, across conditions... so we use the term
COUNTERBALANCING: |
|
Definition
the order of conditions distributes practice effects equally across conditions
Example
*1/2 of participants do condition A then B
*The remaining participants do condition B then A
*Conditions A and B then have equivalent practice effects
*practice effects aren't eliminated, but they are averaged across the conditions of the experiment. |
|
|
Term
( counterbalancing practice effects)
What are the two types of repeated measures designs? |
|
Definition
1. Complete design
2. Incomplete design |
|
Term
What is the complete design? |
|
Definition
Practice effects are balanced within each participant in the complete repeated measures design
- each participant experiences each condition several times, using different orders each time
|
|
|
Term
A Complete repeated measures design is used when ... |
|
Definition
Each condition is brief (ex: simple judgments about stimuli) |
|
|
Term
( complete design)
What are two methods for generating orders of conditions? |
|
Definition
1. Block randomization
2. ABBA counterbalancing |
|
|
Term
(Complete Design)
Block Randomization: |
Definition
-a block consists of all conditions (ex: 4 conditions: A, B, C, D)
-generate a random order of the block (ACBD)
-participant completes condition A, then C, then B, then D
-generate a new random order each time the participant completes the conditions of the experiment (DACB, CDBA, ADCB) |
|
|
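A minimal Python sketch of one participant's trial order under complete-design block randomization (the repetition count is made up):

```python
import random

random.seed(3)

conditions = ["A", "B", "C", "D"]
n_times = 3  # the participant completes the full set of conditions 3 times

# A fresh random block (one random order of all four conditions)
# is generated for each repetition.
trial_order = []
for _ in range(n_times):
    trial_order.extend(random.sample(conditions, k=len(conditions)))

print(trial_order)
```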
Term
(Complete Design)
ABBA Counterbalancing: |
Definition
- used when conditions are presented only a few times to each participant
- procedure: present one random sequence of conditions (ex: DABC), then present the opposite of the sequence (CBAD)
- each condition has the same amount of practice effects |
|
|
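The deck's DABC example can be checked in a few lines of Python; the position sums show why linear practice effects come out equal:

```python
def abba(sequence):
    """ABBA counterbalancing: a random sequence followed by its reverse."""
    return sequence + sequence[::-1]

order = abba(["D", "A", "B", "C"])
print(order)  # → ['D', 'A', 'B', 'C', 'C', 'B', 'A', 'D']

# Each condition's two ordinal positions sum to the same value (here 7),
# so linear practice effects are balanced across conditions.
position_sums = {c: sum(i for i, x in enumerate(order) if x == c) for c in "ABCD"}
print(position_sums)
```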
Term
(Complete Design / ABBA Counterbalancing)
LINEAR PRACTICE EFFECTS |
|
Definition
participants change in the same way following each presentation of a condition |
|
|
Term
(Complete Design / ABBA Counterbalancing)
NONLINEAR PRACTICE EFFECTS... |
|
Definition
participants change dramatically following the administration of a condition |
|
|
Term
Nonlinear practice effects create a confounding... what is it? |
|
Definition
- differences in scores on the DV may not be caused by the IV (conditions A, B, C)
- differences on DV may be due to different amounts of practice effects associated with each condition. |
|
|
Term
ABBA counterbalancing should not be used....... |
|
Definition
when practice effects are likely to vary or change over time (nonlinear practice effects)
** use block randomization instead. |
|
|
Term
ABBA counterbalancing should not be used when anticipation effects can occur because..... |
|
Definition
- participants develop expectations about which condition will appear next in a sequence
- responses may be influenced by expectations rather than actual experience of each condition
-*** if anticipation effects are likely, use block randomization |
|
|
Term
What is the Incomplete Design? |
|
Definition
- each participant experiences each condition of the experiment exactly once
- practice effects are balanced across participants in the incomplete design, as opposed to the complete design, in which practice effects are balanced within each subject |
|
|
Term
What is the general rule for balancing practice effects? |
|
Definition
- each condition (ex: A, B, C) must appear in each ordinal position (1st, 2nd, 3rd) equally often
- if this rule is followed, practice effects will be balanced across conditions and will not confound the experiment. |
|
|
Term
What are two techniques for balancing practice effects in an incomplete repeated measures design? |
|
Definition
1. All possible orders
2. selected orders |
|
|
Term
( Incomplete Design)
ALL POSSIBLE ORDERS |
|
Definition
1. use when there are four or fewer conditions
2. two conditions (A,B)--> two possible orders: AB, BA
- half of the participants would be randomly assigned to do condition A first, followed by B
- other half of participants would complete condition B first, followed by A.
3. Three conditions (A,B,C) --> six possible orders:
ABC, ACB, BAC, BCA, CAB, CBA
- participants would be randomly assigned to one of the six orders
4. The number of possible orders keeps growing for 4, 5, and 6 conditions (4 conditions --> 24 possible orders). |
|
|
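A short Python sketch of the all-possible-orders technique, using itertools to enumerate the orders (the participant count of 12 is made up):

```python
import itertools
import random

random.seed(5)

conditions = ["A", "B", "C"]
all_orders = list(itertools.permutations(conditions))
print(len(all_orders))  # → 6 (i.e., 3!)

# Assign 12 participants so that each order is used equally often
# (here, twice), in a random arrangement.
participants = list(range(12))
order_list = all_orders * (len(participants) // len(all_orders))
random.shuffle(order_list)
assignment = dict(zip(participants, order_list))
```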
Term
(incomplete design)
SELECTED ORDERS |
|
Definition
-Select particular orders of conditions to balance practice effects
- each condition appears in each ordinal position exactly once
- each participant is randomly assigned to one of the orders of conditions. |
|
|
Term
(incomplete design)
What are the two methods of selected orders? |
|
Definition
1. Latin Square
2. Random starting order with rotation. |
|
|
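The second method, random starting order with rotation, can be sketched in Python; each condition lands in each ordinal position exactly once, satisfying the general balancing rule:

```python
import random

random.seed(9)

conditions = ["A", "B", "C", "D"]

# Pick a random starting order, then rotate it by one position
# to generate each additional order.
start = random.sample(conditions, k=len(conditions))
orders = [start[i:] + start[:i] for i in range(len(conditions))]

for row in orders:
    print(row)
```

Participants would then be randomly assigned to one of the four orders.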
Term
( incomplete design)
What is the procedure for the Latin Square? |
|
Definition
- randomly order the conditions of the experiment (ex: ABCD)
- Number the conditions (A=1, B=2, C=3, D=4)
|
|
|
Term
The problem of differential transfer... |
|
Definition
Repeated measures designs should not be used when differential transfer is possible.
- occurs when the effects of one condition persist and affect participants' experience of subsequent conditions
- use independent groups design instead
- assess whether differential transfer is a problem by comparing results for repeated measures design and random groups design. |
|
|
Term
(Comparison of Two Designs)
Differences between repeated measures design and independent groups design |
|
Definition
( Independent variable)
1. Repeated measures: each participant experiences every condition of the IV
2. Independent groups: each participant experiences only one condition of the IV |
|
|
Term
(Comparison of Two Designs)
What is balanced (averaged) across conditions to rule out alternative explanations for findings? |
|
Definition
1. repeated measures: practice effects
2. Independent groups: individual differences variables |
|
|