Term
|
Definition
• The (scientific) study of human behavior and cognitive processes • What is behavior? o Attitudes o Consumer behavior o Attitudes at work o So much more • Psychology: the use of appropriate research methods to study behavior = science |
|
|
Term
|
Definition
|
|
Term
|
Definition
o Basic research – lab research, conducted to test a theory and be able to generalize it to a population; the most common type. The "basic" part is testing theories while trying to control as much as possible o Applied research – conducted to solve a specific problem or identify behavior in a specific situation; a specific question in a context |
|
|
Term
|
Definition
• When an explanation is typically situation specific o When someone else cuts you off, they are a bad person o When you do it, you just didn’t see them o This is called the fundamental attribution error o Confirmation bias • You confirm your bias but don’t try to disprove what you found o Anecdotal evidence • Lacks the generalization component o Failing to have ANY peer-review process • When you send work out to peers/experts in your field who judge whether it is good enough for publication • In general it is about whether you used scientific methods to come up with your conclusions o People search for patterns – we want to see patterns in chaos • Some examples of pseudoscience o palm reading o UFO sightings that are based only on anecdotes o beliefs that certain races have certain defined characteristics o hypnotism used to recover memories o horoscopes |
|
|
Term
Psychology as a science: what makes it a science |
|
Definition
• Five main principles o Principle of Determinism • We are trying to determine the relationship between two or more variables; seeks to establish explanations for events o Principle of Parsimony • Seeks the simplest explanation possible • Ockham’s Razor – entities must not be multiplied beyond necessity – don’t compound explanations with unnecessary complexity o Principle of Testability • Relies on (1) testable and (2) falsifiable statements (claims about God or aliens are classic untestable examples) o Principle of Empiricism • Requires objective observations • Try to reduce bias as much as possible • Typically you need two or three observers to find out what is going on, or you can make a video of it • We are not just going to talk about watching a person; we are not going to influence that person’s actions or behaviors • Ex. when testing someone’s visuospatial abilities, just your being there can change their behavior • Important because of the validity and reliability of an experiment o Principle of Replicable Results • All findings (as well as experiments) must be repeatable • If you confirm your hypothesis, you should design an experiment that tries to falsify it |
|
|
Term
|
Definition
• We are trying to determine the relationship between two or more variables; seeks to establish explanations for events |
|
|
Term
|
Definition
• Seeks the simplest explanation possible • Ockham’s Razor – entities must not be multiplied beyond necessity – don’t try to compound it with complexity |
|
|
Term
o Principle of testability |
|
Definition
• Relies on (1) testable and (2) falsifiable statements (claims about God or aliens are classic untestable examples) |
|
|
Term
o Principle of Empiricism |
|
Definition
• Requires objective observations • Try to reduce bias as much as possible • Typically you need two or three observers to find out what is going on, or you can make a video of it • We are not just going to talk about watching a person; we are not going to influence that person’s actions or behaviors • Ex. when testing someone’s visuospatial abilities, just your being there can change their behavior • Important because of the validity and reliability of an experiment |
|
|
Term
o Principle of Replicable Results |
|
Definition
• All findings (as well as experiments) must be repeatable • If you confirm your hypothesis, you should design an experiment that tries to falsify it |
|
|
Term
distinguishing science from pseudoscience |
|
Definition
1. Findings are published in peer-reviewed publications using standards for honesty and accuracy aimed at scientists 2. Experiments must be precisely described and reproducible; reliable results are demanded 3. Scientific failures are carefully scrutinized and studied for reasons of failure 4. Over time and with continued research, more and more is learned about scientific phenomena 5. Idiosyncratic findings and blunders "average out" and do not affect the actual phenomenon under study 6. Scientists convince others based on evidence and research findings, making the best case permitted by existing data; old ideas are discarded in the light of new evidence 7. The scientist has no personal stake in the specific outcome of a study |
|
|
Term
distinguishing pseudoscience from science |
|
Definition
1. Findings are disseminated to the general public via sources that are not peer reviewed; no pre-publication review for precision or accuracy 2. Studies, if any, are vaguely defined and cannot be reproduced easily; results can't be reproduced 3. Failures are ignored, minimized, explained away, rationalized, or hidden 4. No underlying mechanisms are identified and no new research is done; no progress is made and nothing concrete is learned 5. Idiosyncratic findings and blunders provide the only identifiable phenomena 6. Attempts to convince are based on belief and faith rather than facts; belief is encouraged in spite of the facts, not because of them; ideas are never discarded, regardless of the evidence
7. Serious conflicts of interest; the pseudoscientist makes his or her living off of pseudoscientific products or services |
|
|
Term
The Science of Remembering People’s Names |
|
Definition
• In what conditions do people remember names • We want to explain why names are difficult to remember • We know that even important names are not remembered easily o Hypothesize that remembering names requires undivided attention • Repeated observation o People whose attention is not divided remember names o If people had their attention on just the name and no attention on anything else • Replicate o Even with undivided attention some names are not remembered • Hypothesis is revised several times o Conclude that remembering names is not just about attention o Factors include arousal level, length of name, and information presented before or after the name • Primacy/recency effect • You remember more from the beginning and the end than from the middle |
|
|
Term
|
Definition
• You remember more from the beginning and the end than from the middle |
|
|
Term
|
Definition
• The goal of science is to develop general laws – we want to find out what is true: generalizable, truthful knowledge that will help us understand human behavior • Only the scientific method can lead to general laws because it is the only way to falsify an idea • Even within a scientific design, you have to use specific scientific methods to determine cause and effect • The scientific method consists of four steps o Observing a phenomenon o Formulating a testable explanation (hypothesis) o Further observing and experimenting o Refining and retesting explanations • The scientific method can be tedious and time consuming |
|
|
Term
• The scientific method consists of four steps |
|
Definition
o Observing a phenomenon o Formulating a testable explanation (hypothesis) o Further observing and experimenting o Refining and retesting explanations |
|
|
Term
|
Definition
Step one: Scientific method: OBSERVING A PHENOMENON • While observing a phenomenon, you identify the VARIABLES that appear important in explaining behavior o This is in a way inductive reasoning • Variable = anything that can have two or more values |
|
|
Term
formulating tentative explanations |
|
Definition
Step 2: FORMULATING TENTATIVE EXPLANATIONS • Initial observations allow you to develop a HYPOTHESIS, or tentative statement, about the relationships among the variables identified |
|
|
Term
further observing and experimenting |
|
Definition
step 3: You carry out more detailed OBSERVATIONS of the behavior of interest o These observations are directed at testing your hypothesis |
|
|
Term
refining and retesting explanations |
|
Definition
step 4: • Supported hypotheses are often REFINED and subjected to further exploration o Disconfirmed hypotheses may be reworked and RETESTED |
|
|
Term
steps of the research process |
|
Definition
1. developing an idea and a hypothesis 2. choosing an appropriate research design 3. choosing an appropriate subject pool 4. deciding on what to observe 5. conducting a study 6. analyzing data 7. reporting results |
|
|
Term
|
Definition
• When a study says it is "testing a hypothesis," that indicates it is going to try to test some overarching theory and is generalizable to most people |
|
|
Term
|
Definition
do research in a specific context; the research is geared toward a specific situation and usually focuses on a specific group of people |
|
|
Term
What is a scientific theory |
|
Definition
• You can’t observe a theory directly; you have to infer it o Typically told as a metaphor or "story" (verbal explanation); analogies can be used o It involves multiple variables o Its domain can be wide or narrow (the whole population or a specific population) o A theory is not absolute |
|
|
Term
• Theory of general intelligence (g) |
|
Definition
o Cannot observe it directly (inferred) o Collect data on multiple abilities (observe) • Memory, how people process things, how you can rotate mental objects o We cannot measure intelligence directly o We try to determine the important variables that contribute to intelligence (partially verified) • You always want to identify the theory your research is directed toward |
|
|
Term
|
Definition
• A theory is not a hypothesis • A hypothesis is an educated guess o Involves 2 or more variables o You want to know the relationship between the two variables (if x happens, does y happen?) o Stated so as to be testable o Stated so as to be falsifiable (principle of testability) |
|
|
Term
|
Definition
• More definitive o Substantial evidence o More "absolute" than a theory o Theories explain laws • Example o Fechner’s law • As we experience more of a stimulus, the perceived sensation grows more slowly (psychophysics) o Law of closure |
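Fechner's law from the card above can be written compactly; this standard textbook form (the symbols S, R, and k) is supplied here as background, not taken from the original notes:

```latex
% Fechner's law: perceived sensation grows with the logarithm of stimulus intensity
S = k \log R
% S: perceived sensation magnitude
% R: physical stimulus intensity
% k: a constant that depends on the sense modality
```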
|
|
Term
|
Definition
• General -> specific • Models can be represented as o an equation o text o a diagram |
|
|
Term
|
Definition
• Mechanistic explanation o Describes mechanisms underlying behavior o Describes how behavior works • Functional explanation o Describes an attribute in terms of its function o Describes why behavior exists • Mechanistic explanations are preferred, because when it comes to "why" we can choose pretty much whatever explanation we want |
|
|
Term
• Mechanistic explanation |
|
Definition
o Describes mechanisms underlying behavior o Describes how behavior works • Mechanistic explanations are preferred, because when it comes to "why" we can choose pretty much whatever explanation we want |
|
|
Term
|
Definition
o Describes an attribute in terms of its function o Describes why behavior exists |
|
|
Term
|
Definition
• Describes relationships among variables • Weak form of a theory • Example: o Arousal and performance o Efficiency of memory is one variable; level of arousal or stress is the other o If you become too aroused or stressed, your memory starts to decline o This describes a relationship, but it doesn’t tell us why or how, or the situations where it may not hold o Doesn’t give us much information |
|
|
Term
|
Definition
• A relationship among variables explained through an analogy • One positive aspect: provides a good organizational scheme o If you already have something, usually mechanical, you have something to help explain it o Allows us to collect data o Allows us to predict unexpected outcomes • Analogies are limited o Example • Brain as a computer • A computer doesn’t have emotions • It doesn’t form memories on its own |
|
|
Term
|
Definition
• Theory with a unique structure • It is not based on an analogy • It explains relationships among variables • Uncommon in psychology o One of the reasons is that people vary a lot o Consistency among people is rare o We all have different cultures, languages, religions, emotions, genetic predispositions • Example: cognitive dissonance theory o You see it consistently when you test it on different people across all these different factors • Accounts for things such as buyer’s remorse – you think you want it, but when you finally get it you feel bad about having it • Criticizing what is unattainable – even though you still really want it, you couldn’t have it, so you criticize it |
|
|
Term
What “makes” a good theory (these aren’t linear) |
|
Definition
• Accounts for data that have been collected • Explanatory relevance – it can explain events, rather than explaining them away – it explains what happened and how it happened • Testable (also falsifiable) • Predicts novel events • Parsimony – it doesn’t have a lot of assumptions; if a theory has an assumption, someone has to make a logical argument as to why that assumption must be accepted o Example of parsimony: general intelligence (one overarching intelligence) • That it is something we can actually observe through cognitive abilities |
|
|
Term
|
Definition
• Theory is tested, modified and tested again o Continues until theory adequately accounts for behavior • Several alternative explanations can be tested with an experiment o Some alternatives will be ruled out o New experiment tests remaining alternatives • Works only when alternative explanations generate well-defined predictions |
|
|
Term
Theory Driven Research versus Data Driven Research |
|
Definition
• The quest for theories of learning once drove psychological research • Learning theories became very complex (not parsimonious) • Was it the right time for grand theories? • Some researchers argued for an atheoretical approach o Focus on functional relationships among variables o Start to collect data and figure out how they are related • You either start from a theory and find data that support it, or • You start from data and find the theory that best supports it o Data-driven research falls between theory and hypothesis |
|
|
Term
|
Definition
• Observations>Patterns>Tentative hypothesis>theory • We make observations all the time and then come up with theories • We can go out and systematically make observations |
|
|
Term
|
Definition
• Theory>hypothesis>observation>confirm/disconfirm hypothesis • We will focus on hypothesis through confirmation/disconfirmation of hypothesis |
|
|
Term
sources of research ideas |
|
Definition
Be curious! Be skeptical! Be creative! Personal conversations, experiences, and personal interests; unsystematic observation; systematic observation; newspapers, magazines, television; peer-reviewed journals |
|
|
Term
developing good research questions |
|
Definition
ASK ANSWERABLE QUESTIONS: research questions must be framed so they can be answered using the scientific method. Ask EMPIRICAL questions: ones that can be answered via objective observation. You must be able to OPERATIONALLY DEFINE variables: defining a variable in terms of the operations required to measure it. State a testable research hypothesis |
|
|
Term
Characteristics of good research questions |
|
Definition
• The topic is interesting o Remain engaging for long periods of time • The topic is researchable o Be able to operationalize concepts • the topic is significant o Add to a body of knowledge • The topic is manageable o Fit level of skill and available resources • The topic is ethical o Does not harm or embarrass participants |
|
|
Term
|
Definition
• Focusing on variables that have small effects • Questions that already have firmly established answers • Focusing on variables that have no theoretical interest • Focusing on variables that you have no good reason to believe are related |
|
|
Term
Reasons for Reviewing the Literature |
|
Definition
• Avoid needless duplication of effort • Getting ideas about variables o Design o Materials and procedures • Keeping up to date o Empirical issues o Theoretical issues |
|
|
Term
primary vs secondary sources |
|
Definition
A PRIMARY SOURCE includes the full report, including methodological details. A SECONDARY SOURCE summarizes primary sources; these should be used sparingly, as they can be incomplete, biased, or inaccurate |
|
|
Term
|
Definition
• General textbooks, specialized professional publications, and anthologies o May not represent the entire field, but rather the editor’s bias • Useful in early stages of a literature search • Books that summarize are secondary sources • Books should be used with caution o May not undergo a rigorous review process o Information may not be up to date |
|
|
Term
|
Definition
• Provide current research and theory • Papers submitted to a refereed journal undergo peer review • Usually they are reviewed by 2 or 3 experts • Papers submitted to a nonrefereed journal do not undergo peer review o You should prefer refereed over nonrefereed sources • Criteria to evaluate the quality of a journal o Consulting journals in psychology o Consulting the Social Science Citation Index o Using the method of authority (asking a trusted authority about the quality of a journal) |
|
|
Term
|
Definition
appearance: sober and serious; references: always provided; author: a scholar in the field; language: geared toward scholars; content: original research; publisher: often a professional organization |
|
|
Term
|
Definition
appearance: attractive, with photographs; references: sometimes cited; author: scholar, editorial staff, or freelance; language: for an educated, nonspecialist audience; content: no original research; publisher: commercial or professional organization |
|
|
Term
|
Definition
appearance: attractive, with many photos; reference citations: rarely provided; author: wide range; language: simple, for a less educated audience; content: sources mentioned but may be obscure; publisher: commercial, to entertain |
|
|
Term
|
Definition
appearance: newspaper format; references: obscure; author: wide range; language: elementary, for a gullible audience; content: pseudoscientific sources; publisher: commercial, to arouse curiosity |
|
|
Term
Library research: the basic strategy |
|
Definition
-use the reference section from a textbook or other book -use the reference section from an article to find other articles -repeat the first two steps for each article identified until you can find no new sources -use search engines for the most recent articles -repeat the entire process as you find more recent articles |
|
|
Term
|
Definition
-articles reviewed by experts in the field +usually blind reviewers +intended to ensure quality research is published -the process is time honored but has problems +personal feelings of reviewers can bias the review +agreement with a reviewer's view enhances chances of publication +low rates of inter-reviewer agreement |
|
|
Term
the role of values in science |
|
Definition
-science and scientists are not value free +culture, politics, and personal values -values can influence research in several ways +practices: how research is done affects the integrity of findings +questions: which research questions are addressed and which are ignored +data: how data are interpreted (e.g., using value-laden terms) +specific assumptions: basic assumptions in science (influence the inferences made) +global assumptions: the nature and character of research conducted in an area -values affect science when research findings are translated into "what ought to be" |
|
|
Term
|
Definition
basic assumptions in science -influences inferences made |
|
|
Term
|
Definition
nature and character of research conducted in an area |
|
|
Term
|
Definition
-a literature review may give you a good general idea for research -next, you must develop a testable hypothesis; your hypothesis should flow logically from your literature sources -hypothesis development drives other important decisions (e.g., which variables are included in a study) -a hypothesis should verbally connect two variables and state the relationship between them -after developing your hypothesis, you test it |
|
|
Term
Ethical issues raised by the Watson and Rayner study |
|
Definition
Permission of Albert’s mother was most likely not obtained No informed consent obtained Is conditioning fear into a young child ethical? Watson and Rayner did not have opportunity to reverse the effects of the procedure |
|
|
Term
|
Definition
• Usually involves aggression or sexuality • "Heat of the Moment: The effect of sexual arousal on sexual decision making" o Participants took a laptop home and were asked to masturbate while looking at pictures o They were then asked questions like: would you encourage a date to drink? o They were then given another laptop, this time without the pictures o The fact that they made decisions differently across conditions is really important |
|
|
Term
Evolution of Ethical Guidelines for Research |
|
Definition
• Present day ethical codes evolved slowly over the 20th century • Because of what the Nazis were doing they said we really need to get a hold of it • Several documents specified the parameters of ethical research practice Nuremberg code Declaration of Helsinki Belmont Report APA ethical guidelines |
|
|
Term
The 10 points of the Nuremberg Code |
|
Definition
• Participation must be totally voluntary (can’t be against someone’s will) o People should have the capacity to give consent to participate o People should be fully informed of the purposes of the experiment • Research should yield results that are useful to society o Results that cannot be obtained in any other way (discussion, thinking it out, observation – because anyone can do that) • Research should have a sound footing in animal research o Not as true today o It was meant so that any risks to humans would be seen in animals first • Avoid unnecessary physical or psychological harm to subjects • Research should NOT be conducted if there is reason to believe that death or disability will occur • Risk in the research should be proportional to the benefits; the benefits should actually outweigh the risks • Proper plans should be made to protect people against harm o Plans need to be made in advance o Are you going to debrief the people – yes, you have to • Research should be conducted by highly qualified scientists only • People should have the freedom to withdraw from the experiment if they feel uncomfortable o Can do so at any time o No penalty or negative consequence as a result • The researcher must be prepared to discontinue the experiment o (the Stanford prison experiment was stopped after 6 days even though it was planned for two weeks) |
|
|
Term
|
Definition
• About 400 Black men with syphilis and about 200 without • They were told they were being treated, even though they weren’t • Even when the cure became available, they were not treated • You have men who have syphilis, and you are observing how the disease progresses in these people • Then penicillin is found • Then you don’t give them the cure |
|
|
Term
|
Definition
• 1964 declaration by the World Medical Association • Addressed several issues o The health and welfare of human research participants come first o All medical research must conform to accepted scientific principles o All medical research must be based on knowledge of the relevant scientific literature o Research must be reviewed by an independent group to ensure adherence to ethical standards • i.e., it should not be reviewed by pharmacists or friends • Many of the principles of the declaration are relevant to research in the social sciences |
|
|
Term
|
Definition
• Issued in 1979 • Further clarified ethical issues for researchers • Three basic principles o Respect for persons • Research participants must be treated as autonomous individuals capable of making decisions • Persons with diminished capacity deserve special protection • Research participants must be volunteers and be fully informed o Beneficence • Compensation must not be excessive – benefits set too high can unduly induce participation • The well-being of research participants must be protected • Do no harm to research participants • Maximize benefits of research while minimizing harm o Justice • The researcher and participant share equally in the costs and benefits of research • That is, if it is good for the generalizable population then it is good for both • Precludes using certain populations for research if they will have difficulty refusing participation • Prisoners, children, mental patients |
|
|
Term
o Respect for persons (Belmont) |
|
Definition
• Research participants must be treated as autonomous individuals capable of making decisions • Persons with diminished capacity deserve special protection • Research participants must be volunteers and be fully informed |
|
|
Term
|
Definition
• Compensation must not be excessive – benefits set too high can unduly induce participation • The well-being of research participants must be protected • Do no harm to research participants • Maximize benefits of research while minimizing harm |
|
|
Term
|
Definition
• The researcher and participant share equally in the costs and benefits of research • That is, if it is good for the generalizable population then it is good for both • Precludes using certain populations for research if they will have difficulty refusing participation • Prisoners, children, mental patients |
|
|
Term
THree basic principles for ethical research: Belmont report |
|
Definition
respect for persons, beneficence, justice |
|
|
Term
Powerpoint: Ethical rules derived from Belmont report principles |
|
Definition
Rules flowing from respect for persons Obtain and document informed consent Respect the privacy of research participants Employ special protections for participants who have limited autonomy (e.g., prisoners)
Rules flowing from beneficence Use the least risky research methods possible Potential risks to the participants must be balanced against the potential benefits of the research Fulfill the promise to maintain participant confidentiality Monitor research carefully that involves more than minimal risk of harm to participants Rules flowing from principle of justice Treat research participants equitably Avoid exploiting vulnerable populations |
|
|
Term
|
Definition
The APA began drafting its ethical code in 1947 First APA ethical code accepted in 1953 The most recent version is the Ethical Principles of Psychologists and Code of Conduct 2002 |
|
|
Term
Summary of APA Ethical Guidelines |
|
Definition
Research proposals submitted to an IRB must be accurate Informed consent shall fully inform participants of the nature and requirements Informed consent shall fully inform participants of the parameters of the study Special steps must be taken to protect research participants in a subordinate position Informed consent may be waived under certain circumstances Excessive inducements for participation are to be avoided Deception is allowed if no other alternative is available Participants must be given timely feedback about the nature, results, and conclusions of the research |
|
|
Term
The institutional Review Board |
|
Definition
Research proposals must be screened by an IRB BEFORE research is started Individuals with no vested interest in the research review the proposals The IRB ensures that participants are treated in accord with ethical rules The risk-benefit ratio is assessed IRB review serves two important functions (Fisk, 2009): It ensures that research meets ethical standards It protects researchers from liability arising from research participation IRBs work well when they adhere to two principles (Fisk, 2009): Acting to protect human participants from ethical violations Helping to educate and train staff about ethical issues Improves communication between IRB and researcher |
|
|
Term
IRB two important functions |
|
Definition
It ensures that research meets ethical standards It protects researchers from liability arising from research participation |
|
|
Term
IRB works well when they adhere to two principles |
|
Definition
Acting to protect human participants from ethical violations Helping to educate and train staff about ethical issues Improves communication between IRB and researcher |
|
|
Term
treating science ethically |
|
Definition
Researchers are also obligated to treat science ethically This is embodied in Section C of the APA ethical code: Psychologists seek to promote accuracy, honesty, and truthfulness in the science, teaching, and practice of psychology. In these activities psychologists do not steal, cheat, or engage in fraud, subterfuge, or intentional misrepresentation of fact. (APA, 2002) The APA code must not be taken lightly Fraudulent research harms scientists, participants, the public, and science as a whole Public confidence in science can be eroded because of research fraud The Office of Research Integrity (ORI) of the U.S. Department of Health and Human Services investigates allegations of research fraud |
|
|
Term
|
Definition
The Office of Research Integrity (ORI) of the U.S. Department of Health and Human Services investigates allegations of research fraud |
|
|
Term
internet research and ethical issues |
|
Definition
Internet research has special ethical issues How do ethical guidelines developed for traditional research settings apply to Internet research? Many traditional ethical principles have direct applications to Internet research Three areas where Internet research poses special problems Informed consent Privacy and confidentiality The use of deception |
|
|
Term
internet research and ethical issues: 1st |
|
Definition
Informed consent issues Most relevant for Internet research on participants in chat rooms or other interaction sites Is it ethical to exclude someone from a chat, especially if it is one dealing with a problem, because a person does not want to participate in research? Generally, chat room participants do not take kindly to being in a study Even if you can get consent from some, you may have a biased sample |
|
|
Term
internet research and ethical issues: 2nd |
|
Definition
Privacy and confidentiality issues Because the Internet is so public it poses special problems for privacy and confidentiality Anonymity of chat room participants may be compromised Special steps must be taken to insulate confidential data from hackers Researchers who enter chat rooms and don’t identify themselves as researchers raise ethical issues |
|
|
Term
internet research and ethical issues: 3rd |
|
Definition
Deception in Internet research When deception is used the researcher has a special obligation to research participants Participants should be debriefed about deception Participants should be Dehoaxed by convincing them that deception was necessary and take steps to reverse any ill effects of being deceived Debriefing may not be as easy on the Internet If participants leave a group before study is over it may be impossible to track them down for debriefing You may have to set up a special group or “enclave” where participants can go for debriefing |
|
|
Term
Three guidelines for research using Internet groups |
|
Definition
Learn about and respect the rules of the Internet group that you are going to study Is the group open or private? Edit data collected Remove names, pseudonyms, and group names that could identify participants Use multiple groups Studying multiple groups increases generality of results and adds a layer of protection for participants |
|
|
Term
|
Definition
Outright fabrication of data (most harmful, but rare) Altering data to make them “look better” Selecting only the best data for publication Using the “least publishable unit” rule Deriving several publications out of a single study Sabotage of others’ work Claiming credit for work done by others Attaching your name to a study you had little to do with |
|
|
Term
Functions of a Research design |
|
Definition
Two activities of scientific study Exploratory data collection and analysis Classifying behavior Identifying important variables Identifying relationships among variables Hypothesis testing Evaluating explanations for observed relationships Begins AFTER enough information collected to form testable hypotheses |
|
|
Term
Exploratory data collection and analysis |
|
Definition
Classifying behavior Identifying important variables Identifying relationships among variables |
|
|
Term
|
Definition
Evaluating explanations for observed relationships Begins AFTER enough information collected to form testable hypotheses |
|
|
Term
correlational relationship |
|
Definition
Changes in one variable accompany changes in another Covariation ≠ one variable causes change in another |
|
|
Term
|
Definition
One variable directly or indirectly causes changes in another Can be unidirectional Can be bidirectional |
|
|
Term
correlational research: major features |
|
Definition
No independent variables are manipulated Two or more dependent variables are measured Relationship is established Correlational relationships can be used for predictive purposes A PREDICTOR VARIABLE can be used to predict the value of a CRITERION VARIABLE Correlational research Cannot be used to establish causal relationships among variables |
|
|
Term
predictor and criterion variables |
|
Definition
Predictor: intelligence → Criterion: academic achievement
Predictor: attention → Criterion: recall
Predictor: stress → Criterion: performance at work
Predictor: social support → Criterion: depression |
|
|
Term
Reasons why you should NOT infer causality |
|
Definition
Third-variable problem Potential unmeasured variable Actual cause in observed behavior Directionality problem Direction of relationship unspecified Thus, cause is unknown |
|
|
Term
correlational research: when is it used? |
|
Definition
When gathering data in the early stages of research When manipulating an independent variable is impossible or unethical When you are relating two or more naturally occurring variables |
|
|
Term
Experimental research: Major features |
|
Definition
An independent variable is manipulated (with at least two levels); its value is determined by the researcher
Manipulating an independent variable means exposing participants to at least two values or levels of the variable; the specific conditions associated with each level constitute the treatments of the experiment
A dependent variable (also called a dependent measure) is measured; its value is determined by the participant’s behavior
Goal: show a causal relationship between the values of the independent and dependent variables
The most basic experiment consists of an experimental group (participants receive the experimental treatment) and a control group (participants do not receive the experimental treatment; serves as a baseline of behavior)
Control over extraneous variables: hold them constant, or randomize participants’ exposure across treatments using random assignment
With this control, a causal relationship between the independent and dependent variables can be established |
|
|
Term
The most basic experimental research |
|
Definition
Experimental group: participants receive the experimental treatment
Control group: participants do not receive the experimental treatment; serves as a baseline of behavior
Control over extraneous variables: hold them constant, or randomize participants’ exposure across treatments using random assignment
With this control, a causal relationship between the independent and dependent variables can be established |
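As an illustration only (the participant labels are hypothetical), random assignment of participants across the two groups can be sketched as:

```python
import random

def random_assignment(participants, seed=None):
    """Randomly split participants into experimental and control groups."""
    shuffled = list(participants)
    random.Random(seed).shuffle(shuffled)  # seeded only for reproducibility
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (experimental, control)

participants = ["P%d" % i for i in range(1, 21)]
experimental, control = random_assignment(participants, seed=42)
print(len(experimental), len(control))  # -> 10 10
```

Random assignment distributes extraneous participant variables across treatments on average; it does not eliminate them in any single study.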
|
|
Term
Strength and Limitations of Experimental Research |
|
Definition
Strength Identification of causal relationships among variables Not possible with correlational research Limitations Can’t use experimental method if you cannot manipulate variables Ethical issues Technological limitations Tight control over extraneous variables limits generality of results Tradeoff exists between tight control and generality |
|
|
Term
|
Definition
o The degree to which your design tests what it was intended to test
• You may have to bring this up in your papers
• Experiment: variation in the dependent variable is caused only by variation in the independent variable
• Correlational: changes in the value of the criterion variable are solely related to changes in the value of the predictor variable
o Threatened by extraneous variables
• Variables you cannot control
• They have undesirable effects on the outcome of an experiment because they introduce error; the variability participants bring with them can undermine your study
• Must be considered during the design phase of research |
|
|
Term
factors affecting internal validity |
|
Definition
history - events may occur between multiple observations
maturation - participants may become older or fatigued
testing - taking a pretest can affect results of a later test
instrumentation - changes in instrument calibration or observers may change results
statistical regression - subjects may be selected based on extreme scores
biased subject selection - subjects may be chosen in a biased fashion
experimental mortality - differential loss of subjects from groups in a study occurs |
|
|
Term
external validity and threats to it |
|
Definition
Degree to which results generalize Beyond your sample Beyond research setting Threats to external validity Using a highly controlled laboratory setting Using restricted populations Using pretests Presence of demand characteristics and experimenter bias Subject selection bias |
|
|
Term
Factors affecting external validity |
|
Definition
reactive testing - a pretest may affect reactions to an experimental variable interactions between selection biases and the independent variable -Results may apply only to subjects representing a unique group Reactive effects of experimental arrangements-Artificial experimental manipulations or the subject’s knowledge that he or she is a research subject may affect results. Multiple treatment interference-Exposure to early treatments may affect responses to later treatments |
|
|
Term
internal versus external validity |
|
Definition
Increase in internal validity may decrease external validity and vice versa Internal validity may be more important in basic research External validity may be more important in applied research Must be considered when designing a study |
|
|
Term
|
Definition
The laboratory setting: affords greatest control over extraneous variables; higher levels of internal validity, lower levels of external validity
Simulations: attempt to recreate the real world in the laboratory; realism is an issue
Mundane realism: how well does a simulation mimic the real-world event being simulated?
Experimental realism: how engaging is the simulation for participants?
The field setting: study conducted in a real-world environment
Field experiment: manipulate variables in the field; high degree of external validity, but internal validity may be low because extraneous variables are difficult to control |
|
|
Term
|
Definition
The field setting Study conducted in a real world environment Field experiment: Manipulate variables in the field High degree of external validity Internal validity may be low Difficult to control extraneous variables |
|
|
Term
|
Definition
The laboratory setting Affords greatest control over extraneous variables Higher levels of internal validity Lower levels of external validity Simulations Attempt to recreate the real world in the laboratory Realism is an issue Mundane realism: How well does a simulation mimic the real world event being simulated Experimental realism: How engaging is the simulation for participants |
|
|
Term
|
Definition
How well does a simulation mimic the real world event being simulated |
|
|
Term
|
Definition
How engaging is the simulation for participants |
|
|
Term
conducting observational research |
|
Definition
Purely observational research has two features Correlational designs that do not involve manipulating independent variables Use trained observers |
|
|
Term
quantitative versus qualitative data |
|
Definition
Quantitative data Quantifying behavior numerically (use of scales) Apply a wide range of statistics to quantitative data Not all behavior can be recorded quantitatively Qualitative data Making careful notes during observation periods No numbers generated nor are counts of behavior made Qualitative record analyzed for themes Specialized techniques needed to analyze data Either or both types of data can be recorded if necessary |
|
|
Term
|
Definition
Quantifying behavior numerically (use of scales) Apply a wide range of statistics to quantitative data Not all behavior can be recorded quantitatively |
|
|
Term
|
Definition
Making careful notes during observation periods No numbers generated nor are counts of behavior made Qualitative record analyzed for themes Specialized techniques needed to analyze data |
|
|
Term
developing behavioral categories |
|
Definition
Behavioral categories: general and specific classes of behavior to be observed; also called coding schemes
Categories must be operationally defined - what are the criteria that match those behaviors; behavioral categories must be clearly defined
Begin with clear goals for research: clearly define all hypotheses, keep categories as simple as possible, and avoid the temptation to accomplish too much in one study
Make preliminary observations of behavior: become familiar with the behavior observed; create a list of behaviors of interest that can be condensed later
Conduct a literature search: find research that used behavioral categories; you might find “perfect” categories, or you can adapt categories for your research
Take the time to carefully construct your categories: adjustments are easier before than during your research |
|
|
Term
|
Definition
Behavioral categories: general and specific classes of behavior to be observed
Also called coding schemes - but you don't use numbers |
|
|
Term
quantifying behavior in observational research |
|
Definition
Frequency Method Record the frequency of a behavior Usually within a time period Duration Method Record how long a behavior lasts Intervals Method Divide the observation period Several discrete time intervals Example is ten 2-minute intervals Record whether a behavior occurs within each interval |
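The intervals method in the card above can be sketched as follows; the event times and session length are hypothetical:

```python
# Intervals method: divide a 20-minute observation period into ten
# 2-minute intervals and record whether the target behavior occurred
# in each one (1 = occurred, 0 = did not).

def interval_record(event_times, interval_len, n_intervals):
    """Return a 1/0 list: did the behavior occur in each interval?"""
    occurred = [0] * n_intervals
    for t in event_times:
        idx = int(t // interval_len)  # which interval this event falls in
        if idx < n_intervals:
            occurred[idx] = 1
    return occurred

# Behavior observed at 1.5, 3.0, and 15.2 minutes into the session
record = interval_record([1.5, 3.0, 15.2], interval_len=2, n_intervals=10)
print(record)  # -> [1, 1, 0, 0, 0, 0, 0, 1, 0, 0]
```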
|
|
Term
|
Definition
Record the frequency of a behavior Usually within a time period |
|
|
Term
|
Definition
Record how long a behavior lasts |
|
|
Term
|
Definition
Divide the observation period Several discrete time intervals Example is ten 2-minute intervals Record whether a behavior occurs within each interval |
|
|
Term
recording single events versus behavior sequences |
|
Definition
Traditional observational research Record single events in an observation Behavior sequences can also be recorded Record behaviors occurring sequentially Pro Provides a more complex picture of behavior “Thicker” descriptions Con Requires more effort But . . . data yielded is worth the effort |
|
|
Term
coping with complexity in observational research |
|
Definition
Time sampling: scan subjects for a specific period (limit to 60 seconds), then record observations during the next period; works best with two or more observers
Individual sampling: select a subject, observe behavior for a given period, then shift to another subject and repeat
Event sampling: select one behavior for observation and record all instances of that behavior; best if one behavior can be specified as being more important than others
Recording: video camera, audio recorder |
|
|
Term
|
Definition
Scan subjects for a specific period Limit to 60 seconds Record observations during the next period Works best with two or more observers |
|
|
Term
|
Definition
Select a subject Observe behavior for a given period Shift to another subject and repeat |
|
|
Term
|
Definition
Select one behavior for observation Record all instances of that behavior Best if one behavior can be specified as being more important than others |
|
|
Term
|
Definition
Multiple observers Efficiency and workload Increase reliability of observations Must establish reliability of observations Interrater reliability |
|
|
Term
Methods of evaluating interrater reliability |
|
Definition
Percent agreement Cohen’s Kappa Pearson Product-Moment Correlation |
|
|
Term
|
Definition
interrater reliability: simplest method
(total agreements / total observations) * 100
Percent agreement should be 70% or better
Percent agreement may underestimate or overestimate agreement |
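A minimal sketch of the percent agreement calculation; the ratings are hypothetical:

```python
def percent_agreement(rater_a, rater_b):
    """(total agreements / total observations) * 100 for two raters."""
    agreements = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return agreements / len(rater_a) * 100

a = ["hit", "miss", "hit", "hit", "miss"]
b = ["hit", "hit",  "hit", "hit", "miss"]
print(percent_agreement(a, b))  # -> 80.0 (4 agreements out of 5)
```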
|
|
Term
|
Definition
interrater reliability Assesses the amount of agreement actually observed relative to the amount of agreement that would be expected by chance A Cohen’s Kappa of .70 indicates acceptable reliability |
|
|
Term
Pearson Product-Moment Correlation |
|
Definition
Correlate ratings of multiple observers with Pearson r Simple and easy method to evaluate interrater reliability Two sets of scores may correlate highly, but may still differ markedly |
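The caveat that two sets of scores can correlate highly yet still differ markedly is easy to demonstrate: in this hypothetical example one rater always scores one point higher than the other, and r is still a perfect 1.0. (Pearson r is hand-rolled here for illustration; in practice you would use a statistics package.)

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

obs1 = [3, 5, 2, 4, 4]
obs2 = [4, 6, 3, 5, 5]  # always exactly one point higher than obs1
print(round(pearson_r(obs1, obs2), 2))  # -> 1.0, despite the raters never agreeing
```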
|
|
Term
Dealing with data from multiple observers |
|
Definition
Must deal with the disagreement Methods used to handle disagreements Average across observers Have observers meet to resolve disagreements Designate one observer as the “main observer” and the other a “secondary observer” Make selections before you begin your research Data from main observer used as data for analysis Data from secondary observer used to establish reliability |
|
|
Term
|
Definition
Kappa = (Po - Pc) / (1 - Pc)
Po = (A1 + A2) / total observations
Pc = ((C1 * R1) + (C2 * R2)) / total squared |
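A minimal sketch of Cohen's kappa for two raters, computed as (Po - Pc) / (1 - Pc); the category labels are hypothetical:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical judgments."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Po: proportion of observations on which the raters agree
    po = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n
    # Pc: agreement expected by chance, from each rater's marginal proportions
    pc = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
             for c in categories)
    return (po - pc) / (1 - pc)

print(cohens_kappa(["y", "y", "n", "n"], ["y", "n", "n", "n"]))  # -> 0.5
```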
|
|
Term
|
Definition
Nonexperimental: unobtrusive observation of subjects
Record naturally occurring behavior
Habituation can be used if you cannot remain hidden
Can use indirect measures of behavior
Advantage: behavior not tainted by an artificial setting
Disadvantage: purely descriptive, cannot infer causality |
|
|
Term
|
Definition
The researcher becomes immersed Becomes part of behavioral or social system May be conducted as a participant or non-participant observation study Participant-observer Observer-participant |
|
|
Term
Nonexperimental approaches to data collection, factors |
|
Definition
A number of factors must be considered Gaining entry to field setting Gaining entry into group to be studied Making yourself “invisible” in the group Making observations and recording behavior How to analyze data |
|
|
Term
|
Definition
Applies statistical techniques to evaluate a body of literature Combine or compare studies Quantitative way of summarizing research literature Three steps involved in doing a meta-analysis Identifying relevant variables Locating research to review Conducting the meta-analysis |
|
|
Term
|
Definition
Identifying relevant variables Locating research to review Conducting the meta-analysis |
|
|
Term
draw backs to meta analysis |
|
Definition
Quality of research varies across different journals Research from different journals may have to be differentially weighted No agreement on how research should be weighted Studies may have used very different methodologies May not be a serious problem Different methodologies analogous to different participants in a study Practical problems Incomplete information Imprecise information |
|
|
Term
meta - analysis versus traditional view |
|
Definition
Meta-analysis yields different conclusions than a traditional literature review
More likely to detect differences; larger effects of variables detected
No difference in evaluating methodology
May be a better basis for decisions about the effects of variables |
|
|
Term
|
Definition
Observation alone gives no measures; field surveys provide measures
Assess attitudes and/or behavior
Survey research is purely correlational: no causal inferences can be drawn
Can use survey research to predict behavior
Criteria for conducting survey research: anonymity, confidentiality |
|
|
Term
designing a questionnaire |
|
Definition
First step: clearly define your topic
A clear topic yields unambiguous responses, gives clear operational definitions for the behaviors measured, and keeps the questionnaire focused on the topic; avoid doing too much
Collect demographic information (age, gender, religion, race, etc.); often used as predictor variables
Items measuring target behavior or attitudes are often used as criterion variables |
|
|
Term
|
Definition
Respondents are asked to answer a question in their own words Drawbacks to open-ended items Respondent may not understand what you are asking Difficult to summarize and analyze |
|
|
Term
Restricted (closed-ended) |
|
Definition
Respondents are given a list of alternatives Alternatives can be ordered or unordered Gives you control over participant’s responses |
|
|
Term
|
Definition
“Other” alternative Respondent writes in an alternative |
|
|
Term
|
Definition
Respondents use number on a scale (e.g., 0 to 10) Best reflects their opinions Two factors need to be considered Number of points on the scale How to label the scale (e.g., endpoints only or each point) |
|
|
Term
|
Definition
Five-point scale used to assess attitudes Respondents indicate the degree of agreement or disagreement Used in experimental research as well as survey research |
|
|
Term
writing good items for the survey |
|
Definition
Use simple words Make the stem of a question short and easy to understand Avoid vague questions Don’t ask for too much information in one question Avoid “check all that apply” items Avoid questions that ask for more than one thing Soften impact of sensitive questions |
|
|
Term
assembling your questionnaire |
|
Definition
Organize questions into a coherent, visually pleasing format Do not present demographic items first Use an interesting question as your first item Keep related items together (continuity) Be aware that question order can make a difference Place sensitive or objectionable items after less sensitive/objectionable items Establish a logical navigational path |
|
|
Term
administering your questionnaire |
|
Definition
Mail, Internet, telephone, group administration, interview |
|
|
Term
|
Definition
Questionnaire mailed directly to participants
Very convenient
Nonresponse bias is a serious problem: results in an unrepresentative sample
May reduce nonresponse bias by making multiple contacts with respondents and including a small token of your appreciation |
|
|
Term
internet survey distribution |
|
Definition
Survey distributed via e-mail or on a Web site
E-mail surveys are best for relatively short surveys; Web surveys allow for more complex navigational paths
Large samples can be acquired quickly
Nonresponse bias must still be considered, and biased samples are possible due to uneven computer ownership
Results from Internet surveys are comparable to other methods for most nonsensitive applications; use caution when doing Internet surveys on controversial issues |
|
|
Term
|
Definition
Participants are contacted by telephone and asked questions directly Can be done by a live researcher or by using Interactive Voice Response (IVR) technology Questions must be asked carefully The plethora of “junk calls” makes participants suspicious |
|
|
Term
|
Definition
Questionnaire distributed to a group of participants (e.g., a class) Completed by participants at the same time Ensuring anonymity may be a problem |
|
|
Term
|
Definition
Participants asked questions face-to-face Structured or unstructured format Characteristics or behavior of the interviewer may affect the participants’ responses Examples? |
|
|
Term
evaluating the reliability of your survey |
|
Definition
Test-Retest Reliability Parallel Form Reliability Split-Half Reliability Kuder-Richardson Formula (KR20) |
|
|
Term
|
Definition
Requires multiple administrations of a test Attention must be paid to the inter-test interval Especially problematic if Ideas being measured fluctuate Participants likely to recall responses from earlier tests Your questions are long and boring |
|
|
Term
Parallel Form Reliability |
|
Definition
Essentially the same as test-retest reliability Alternate form of the test administered the second time |
|
|
Term
|
Definition
Reliability assessed with one administration of a test Items from one half of a test are correlated with items from the second half of a test An odd-even split is the best way to split a test |
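A sketch of an odd-even split with hypothetical item scores: each respondent's total on the odd-numbered items is correlated with their total on the even-numbered items.

```python
def pearson_r(x, y):
    """Pearson product-moment correlation (hand-rolled for illustration)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Each row: one respondent's scores on a 6-item test (hypothetical data)
tests = [
    [5, 4, 5, 5, 4, 5],
    [2, 3, 2, 2, 3, 2],
    [4, 4, 3, 4, 4, 4],
]
odd = [sum(row[0::2]) for row in tests]   # items 1, 3, 5
even = [sum(row[1::2]) for row in tests]  # items 2, 4, 6
split_half_r = pearson_r(odd, even)       # high r -> internally consistent test
```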
|
|
Term
Kuder-Richardson Formula (KR20) |
|
Definition
Reliability assessed with one administration of a test Formula estimates average for all possible split-half reliabilities A KR20 of at least .75 indicates moderate reliability |
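The KR20 formula, KR20 = (k / (k - 1)) * (1 - sum(p*q) / variance of total scores), applies to dichotomously scored (0/1) items. A sketch with hypothetical data:

```python
def kr20(item_matrix):
    """KR20 reliability; rows = respondents, columns = 0/1 item scores."""
    n = len(item_matrix)      # number of respondents
    k = len(item_matrix[0])   # number of items
    totals = [sum(row) for row in item_matrix]
    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / n
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in item_matrix) / n  # proportion passing item j
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_total)

scores = [[1, 1, 1, 1],
          [1, 1, 1, 0],
          [1, 0, 0, 0],
          [0, 0, 0, 0]]
print(round(kr20(scores), 2))  # -> 0.87, above the .75 moderate-reliability cutoff
```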
|
|
Term
increasing the reliability of a questionaire |
|
Definition
Increase number of items on your questionnaire Standardize the conditions under which the test is administered Timing procedures, lighting, ventilation, instructions Score your questionnaire carefully Train scorers carefully Eliminate scoring errors Make sure items are clearly written Appropriate for those who will complete your questionnaire |
|
|
Term
assessing the quality of your questionnaire |
|
Definition
You must assess the validity of your questionnaire Validity assessed with variety of methods Content validity Construct validity Criterion-related validity Concurrent validity Predictive validity Validity can be affected by a number of factors Method of administration, unclear questions |
|
|
Term
Acquiring a survey sample |
|
Definition
You should obtain a representative sample Sample closely matches the characteristics of the population Biased sample Your sample characteristics don’t match population characteristics Biased samples produce misleading or inaccurate results Usually results from inadequate sampling procedures |
|
|
Term
|
Definition
simple random sampling
stratified sampling
proportionate sampling
systematic sampling
cluster sampling
multistage sampling
nonrandom sampling |
|
|
Term
|
Definition
You should try to select an economic sample Includes enough respondents to ensure a valid survey and no more Two factors are taken into account when determining necessary sample size Amount of acceptable sampling error Expected magnitude of population proportions There is a formula that is used to calculate sample size using the above parameters |
|
|
Term
|
Definition
Randomly select a sample from the population Random digit dialing is a variant used with telephone surveys Reduces systematic bias, But does not guarantee a representative sample Some segments of the population may be over- or underrepresented |
|
|
Term
|
Definition
Used to obtain a representative sample Population is divided into demographic strata A random sample of a fixed size is drawn from each stratum May still lead to over- or underrepresentation of certain segments of the population |
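A sketch of stratified sampling with a fixed sample size per stratum; the population and strata here are hypothetical:

```python
import random

def stratified_sample(population, strata_key, n_per_stratum, seed=0):
    """Draw a fixed-size random sample from each stratum."""
    rng = random.Random(seed)  # seeded only for reproducibility
    strata = {}
    for person in population:
        strata.setdefault(strata_key(person), []).append(person)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, n_per_stratum))
    return sample

# Hypothetical population: 50 freshmen and 30 seniors
population = ([{"id": i, "year": "freshman"} for i in range(50)]
              + [{"id": i, "year": "senior"} for i in range(50, 80)])
sample = stratified_sample(population, lambda p: p["year"], n_per_stratum=5)
print(len(sample))  # -> 10 (5 from each of the two strata)
```

Note that a fixed size per stratum overrepresents the smaller stratum; proportionate sampling (next card) fixes that.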
|
|
Term
|
Definition
Same as stratified sampling Proportions of different groups in the population are reflected in the samples from the strata |
|
|
Term
|
Definition
Used in conjunction with stratified sampling Every kth element is sampled after a randomly selected starting point Sample every fifth name in the telephone book after a random page and starting point selected, for example |
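Systematic sampling maps directly onto Python's slice notation; the sampling frame here is hypothetical:

```python
def systematic_sample(frame, k, start):
    """Take every kth element of the frame, beginning at the given start."""
    return frame[start::k]

names = ["name_%d" % i for i in range(25)]
# In practice the starting point would be chosen at random (0 <= start < k)
print(systematic_sample(names, k=5, start=2))
# -> ['name_2', 'name_7', 'name_12', 'name_17', 'name_22']
```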
|
|
Term
|
Definition
Used when populations are very large The unit of sampling is a group (e.g., a class in a school) rather than individuals Groups are randomly sampled from the population (e.g., ten classes from a particular school) |
|
|
Term
|
Definition
First, identify large clusters (e.g., school districts) and randomly sample from that population Second, sample individuals from randomly selected clusters Can be used along with stratified sampling to ensure a representative sample |
|
|
Term
Nonrandom sampling revisited |
|
Definition
Surveys sometimes have nonrandom samples Surveys done on a college campus Internet survey May limit generality of survey results |
|
|
Term
|
Definition
nominal, ordinal, interval, ratio |
|
|
Term
|
Definition
o Lowest scale of measurement
o Variables whose values differ by category (e.g. male/female)
o Values of the variable have different names, but no ordering of values is implied
o You can count numbers of observations falling into categories but cannot apply mathematical operations |
|
|
Term
|
Definition
o Higher scale of measurement than nominal scale
o Different values of a variable can be ranked according to quantity (e.g. high, moderate, or low self-esteem)
o Mathematical operations are likely to produce misleading results because the values are still categorical - you are putting people into categories, but they are rank ordered |
|
|