Term
Methods of acquiring knowledge
Tenacity |
|
Definition
From habit or superstition |
|
|
Term
Methods of acquiring knowledge
Intuition |
|
Definition
From a hunch or feeling
If you “know” that you do not want to ride the new rollercoaster even though your friends say that it is really fun, then your decision is based on |
|
|
Term
Methods of acquiring knowledge
Authority |
|
Definition
From an expert
Seeking answers by reading a chapter in a college textbook is an example of using the |
|
|
Term
Methods of acquiring knowledge
Rationalism |
|
Definition
From reasoning; a logical conclusion
A group of students in a cooking class is trying to find a faster way to bake a cake. They know that it takes 30 minutes to bake a cake at 350 degrees, so they figure that it should take only 15 minutes at 700 degrees. These students are using the to solve the problem. |
|
|
Term
Methods of acquiring knowledge
Empiricism |
|
Definition
From direct sensory observation
You find some mushrooms growing in your backyard and want to find out whether or not they are poisonous, so you eat a few and see what happens. This is an example of the method of knowing or acquiring knowledge. |
|
|
Term
|
Definition
Attempts to answer questions by direct observation or personal experience. It is a product of the empirical viewpoint in philosophy, which holds that all knowledge is acquired through the senses. The empirical method is the practice of employing direct observation as a source of knowledge; evidence or observation with one's senses is required for verification of information. |
|
|
Term
Science versus Pseudoscience |
|
Definition
The primary distinction between science and pseudoscience is based on the notion of testable and refutable hypotheses. Specifically, a theory is scientific only if it can specify how it could be refuted. Science demands an objective and unbiased evaluation of all the available evidence. Unless a treatment shows consistent success that cannot be explained by other outside factors, the treatment is not considered to be effective. Pseudoscience, on the other hand, tends to rely on subjective evidence such as testimonials and anecdotal reports of success. Pseudoscience also tends to focus on a few selected examples of success and to ignore instances of failure. |
|
|
Term
The Steps of the Scientific Method
Step 1 |
|
Definition
Observe Behavior or Other Phenomena (begins with casual or informal observations). At this stage in the process, people commonly tend to generalize beyond the actual observations. This process of generalization is an almost automatic human response known as induction, or inductive reasoning. |
|
|
Term
The Steps of the Scientific Method
Step 2 |
|
Definition
Form a Tentative Answer/Explanation (Hypothesis). Begin by identifying other factors, or variables, that are associated with your observation. Choose the explanation that you consider most plausible, or simply pick the one you find most interesting; you now have a hypothesis, or a possible explanation, for your observation. Your hypothesis is not considered a final answer BUT IS a tentative answer that is intended to be tested and critically evaluated. |
|
|
Term
The Steps of the Scientific Method Step 3 |
|
Definition
Use the Hypothesis to Generate a Testable Prediction. Take the hypothesis and apply it to a specific, observable, real-world situation. A single hypothesis can lead to several different predictions, and each prediction refers to a specific situation or event that can be observed and measured. Here we are using logic (the rational method) to make the prediction. This time, the logical process is known as deduction, or deductive reasoning: we begin with a general (universal) statement and then make specific deductions. In particular, we use our hypothesis as a universal premise statement and then determine the conclusions or predictions that must logically follow if the hypothesis is true. |
|
|
Term
The Steps of the Scientific Method Step 4 |
|
Definition
Evaluate the Prediction by Making Systematic, Planned Observations. After a specific, testable prediction has been made (the rational method), the next step is to evaluate the prediction using direct observation (the empirical method). This is the actual research or data collection phase of the scientific method. The goal is to provide a fair and unbiased test of the research hypothesis by observing whether the prediction is correct. |
|
|
Term
|
Definition
The Process of Scientific Inquiry
a circular process |
|
|
Term
The Steps of the Scientific Method Step 5 |
|
Definition
Use the Observations to Support, Refute, or Refine the Original Hypothesis. The final step of the scientific method is to compare the actual observations with the predictions that were made from the hypothesis. To what extent do the observations agree with the predictions? Agreement indicates support for the original hypothesis, and suggests that you consider making new predictions and testing them. Lack of agreement indicates that the original hypothesis was wrong or that the hypothesis was used incorrectly, producing faulty predictions. In this case, you might want to revise the hypothesis or reconsider how it was used to generate predictions. In either case, notice that you have circled back to Step 2. |
|
|
Term
|
Definition
Examples of Induction and Deduction
|
|
|
Term
Induction, Inductive Reasoning |
|
Definition
Inductive reasoning involves reaching a general conclusion based on a few specific examples. |
|
|
Term
|
Definition
Characteristics or conditions that change or have different values for different individuals. |
|
|
Term
|
Definition
The use of a general statement as the basis for reaching a conclusion about specific examples. Also known as deductive reasoning. |
|
|
Term
The scientific method consists of five steps:
- observation of behavior or other phenomena;
- formation of a tentative answer or explanation, called a hypothesis;
- use of the hypothesis to generate a testable prediction;
- evaluation of the prediction by making systematic, planned observations; and
- use of the observations to support, refute, or refine the original hypothesis.
|
|
Definition
|
|
Term
Characteristics of a Good Hypothesis
a good hypothesis must be testable
|
|
Definition
A hypothesis for which all of the variables, events, and individuals are real and can be defined and observed; that is, it must be possible to observe and measure all of the variables involved. In particular, the hypothesis must involve real situations, real events, and real individuals. |
|
|
Term
Characteristics of a Good Hypothesis
it must be refutable |
|
Definition
It must be possible to obtain research results that are contrary to the hypothesis.
A hypothesis that can be demonstrated to be false. That is, the hypothesis allows the possibility that the outcome will differ from the prediction. |
|
|
Term
Characteristics of a Good Hypothesis
must make a positive statement |
|
Definition
It must make a positive statement about the existence of something: usually the existence of a relationship, the existence of a difference, or the existence of a treatment effect. |
|
|
Term
|
Definition
In the behavioral sciences, theories are statements about the mechanisms underlying a particular behavior.
In attempting to explain and predict behavior, scientists and philosophers often develop theories that contain hypothetical mechanisms and intangible elements. |
|
|
Term
Hypothetical Constructs
Constructs |
|
Definition
Hypothetical attributes or mechanisms that help explain and predict behavior in a theory. Also known as hypothetical constructs. Many research variables, particularly variables of interest to behavioral scientists, are in fact hypothetical entities created from theory and speculation.
Although these mechanisms/elements cannot be seen and are only assumed to exist, we accept them as real because they describe and explain the behaviors that we see. |
|
|
Term
|
Definition
A procedure for indirectly measuring and defining a variable that cannot be observed or measured directly. An operational definition specifies a measurement procedure (a set of operations) for measuring an external, observable behavior and uses the resulting measurements as a definition and a measurement of the hypothetical construct.
Operational definitions are used as a basis for measuring variables and can be used to define variables to be manipulated. |
|
|
Term
Hypothetical Constructs
Constructs |
|
Definition
Diagram: External Stimulus → Construct → External Behavior |
|
Term
|
Definition
The degree to which the measurement process measures the variable it claims to measure.
By demonstrating that two or more different methods of measurement produce strongly related scores for the same construct (convergent validity), together with a weak relationship between the measurements for two distinct constructs (divergent validity), you can provide very strong evidence that you are actually measuring the construct that you intend to measure. |
|
|
Term
|
Definition
simplest and least scientific definition of validity. Face validity concerns the superficial appearance, or face value, of a measurement procedure. Does the measurement technique look like it measures the variable that it claims to measure? |
|
|
Term
|
Definition
The type of validity demonstrated when scores obtained from a new measure are directly related to scores obtained from a more established measure of the same variable. |
|
|
Term
|
Definition
The type of validity demonstrated when scores obtained from a measure accurately predict behavior according to a theory |
|
|
Term
|
Definition
The type of validity demonstrated when scores obtained from a measurement behave exactly the same as the variable itself. Construct validity is based on many research studies and grows gradually as each new study contributes more evidence. |
|
|
Term
|
Definition
The type of validity demonstrated by a strong relationship between the scores obtained from two different methods of measuring the same construct. |
|
|
Term
|
Definition
A type of validity demonstrated by using two different methods to measure two different constructs. Convergent validity then must be shown for each of the two constructs. Finally, there should be little or no relationship between the scores obtained for the two different constructs when they are measured by the same method. |
|
|
Term
|
Definition
The second criterion for evaluating the quality of a measurement procedure is called reliability: the degree of stability or consistency of measurements. A measurement procedure is said to have reliability if it produces identical (or nearly identical) results when it is used repeatedly to measure the same individual under the same conditions. |
|
|
Term
|
Definition
The type of reliability established by comparing the scores obtained from two successive measurements of the same individuals and calculating a correlation between the two sets of scores. |
|
|
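Note: the correlation mentioned in the card above is typically a Pearson correlation between the two sets of scores. A minimal sketch follows; the score lists are invented illustration data, and the formula is implemented by hand rather than with a statistics library:

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for the same five individuals measured on two occasions
test = [10, 12, 14, 16, 18]
retest = [11, 12, 13, 17, 17]

# A value near +1.0 indicates high test-retest reliability
r = pearson_r(test, retest)
```

With these made-up scores, r comes out close to 1, which would be interpreted as strong test-retest reliability.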
Term
Reliability
parallel-forms reliability |
|
Definition
The type of reliability established by comparing scores obtained by using two alternate versions of a measuring instrument to measure the same individuals and calculating a correlation between the two sets of scores. |
|
|
Term
|
Definition
The degree of agreement between two observers who simultaneously record measurements of a behavior. |
|
|
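The agreement in the card above can be sketched numerically. The ratings below are invented, and the sketch shows simple percent agreement, not a chance-corrected statistic such as Cohen's kappa:

```python
# Two observers independently code the same six observation intervals
rater_a = ["on-task", "off-task", "on-task", "on-task", "off-task", "on-task"]
rater_b = ["on-task", "off-task", "on-task", "off-task", "off-task", "on-task"]

# Proportion of intervals on which the two observers agree
matches = sum(a == b for a, b in zip(rater_a, rater_b))
agreement = matches / len(rater_a)
```

Here the observers agree on 5 of 6 intervals, so agreement is about 0.83.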
Term
|
Definition
A measure of reliability obtained by splitting the items on a questionnaire or test in half, computing a separate score for each half, and then measuring the degree of consistency between the two scores for a group of participants. |
|
|
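A minimal sketch of the split-half procedure, assuming an odd/even item split (one common choice) and made-up 0/1 item responses. The Spearman-Brown correction, a standard follow-up step, estimates full-test reliability from the half-test correlation:

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical 0/1 item responses: four participants x six items
responses = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 0, 1, 1, 0],
    [0, 1, 1, 0, 0, 1],
    [0, 0, 0, 1, 0, 0],
]

# Split the items into odd- and even-numbered halves and score each half
odd_half = [sum(row[0::2]) for row in responses]
even_half = [sum(row[1::2]) for row in responses]

# Consistency between the two half-scores across participants
r_half = pearson_r(odd_half, even_half)

# Spearman-Brown correction: estimated reliability of the full-length test
r_full = (2 * r_half) / (1 + r_half)
```

The correction is needed because each half contains only half the items, and shorter tests are less reliable; r_full is always at least as large as r_half.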
Term
|
Definition
The clustering of scores at the high end of a measurement scale, allowing little or no possibility of increases in value; a type of range effect. |
|
|
Term
|
Definition
The clustering of scores at the low end of a measurement scale, allowing little or no possibility of decreases in value; a type of range effect. |
|
|
Term
|
Definition
The influence on the findings of a study from the experimenter’s expectations about the study. Experimenter bias is a type of artifact and threatens the validity of the measurement, as well as both internal and external validity. |
|
|
Term
|
Definition
A research study in which the researcher does not know the predicted outcome for any specific participant |
|
|
Term
|
Definition
A research study in which both the researcher and the participants are unaware of the predicted outcome for any specific participant. |
|
|
Term
|
Definition
Any potential cues or features of a study that (1) suggest to the participants what the purpose and hypothesis are, and (2) influence the participants to respond or behave in a certain way. Demand characteristics are artifacts and can threaten the validity of the measurement, as well as both internal and external validity. |
|
|
Term
|
Definition
The different ways that participants respond to experimental cues based on whatever they judge to be appropriate in the situation. Also known as subject role behavior. |
|
|
Term
|
Definition
In a study, a participant’s tendency to respond in a way that is expected to corroborate the investigator’s hypothesis. |
|
|
Term
negativistic subject role |
|
Definition
In a study, a participant’s tendency to respond in a way that is expected to refute the investigator’s hypothesis. |
|
|
Term
apprehensive subject role |
|
Definition
In a study, a participant’s tendency to respond in a socially desirable fashion rather than truthfully. |
|
|
Term
|
Definition
In a study, a participant’s attempt to follow experimental instructions to the letter and to avoid acting on the basis of any suspicions about the purpose of the experiment. |
|
|
Term
|
Definition
A research setting that is obviously devoted to the discipline of science. It can be any room or space that the subject or participant perceives as artificial.
Reactivity is especially a problem in studies conducted in a laboratory, where participants are fully aware that they are participants in a study.
|
|
|
Term
|
Definition
Any research setting that the participant or subject perceives as a natural environment.
participants are observed in their natural environment and are much less likely to know that they are being investigated; hence, they are less reactive.
|
|
|
Term
|
Definition
A set of 10 guidelines for the ethical treatment of human participants in research. The Nuremberg Code, developed from the Nuremberg Trials of 1947, is the groundwork for the current ethical standards for medical and psychological research. |
|
|
Term
|
Definition
In 1972, a newspaper report exposed a Public Health Service study, commonly referred to as the Tuskegee study, in which nearly 400 men had been left to suffer with syphilis long after a cure (penicillin) was available. The study began as a short-term investigation to monitor untreated syphilis, but continued for 40 years just so the researchers could examine the final stages of the disease |
|
|
Term
Milgram obedience study (Milgram, 1963) |
|
Definition
The participants entered the study thinking that they were normal, considerate human beings, but they left with the knowledge that they could all too easily behave inhumanely |
|
|
Term
Ethical issues, tuskegee, milgram |
|
Definition
It is important to note two things about these cases. First, although such cases constitute a very small percentage of all the research that is conducted, many examples of questionable treatment do exist. Second, it is events like these that shaped the guidelines we have in place today. |
|
|
Term
Major Ethical Issues
No harm |
|
Definition
The researcher is obligated to protect participants from physical or psychological harm. The entire research experience should be evaluated to identify risks of harm, and when possible, such risks should be minimized or removed from the study. Any risk of harm must be justified. The justification may be that the scientific benefits of the study far outweigh the small, temporary harm that can result, or that greater harm is likely to occur unless some minor study risk is accepted. |
|
|
Term
Major Ethical Issues
informed consent |
|
Definition
The ethical principle requiring the investigator to provide all available information about a study so that a participant can make a rational, informed decision regarding whether to participate in the study. |
|
|
Term
Major Ethical Issues
deception |
|
Definition
The purposeful withholding of information from, or misleading of, participants about a study. There are two forms of deception: passive and active. |
|
|
Term
Major Ethical Issues
debriefing |
|
Definition
A post-experimental explanation of the purpose of the study. A debriefing is given after a participant completes a study, especially if deception was used. |
|
|
Term
Major Ethical Issues
Confidentiality |
|
Definition
The practice of keeping strictly secret and private the information or measurements obtained from an individual during a research study. APA ethical guidelines require researchers to ensure the confidentiality of their research participants. |
|
|
Term
Major Ethical Issues
Anonymity |
|
Definition
The practice of ensuring that an individual’s name is not directly associated with the information or measurements obtained from that individual. Keeping records anonymous is a way to preserve the confidentiality of research participants. |
|
|
Term
Major Ethical Issues
Active versus Passive deception |
|
Definition
Researchers sometimes do not tell participants the true purpose of the study. One technique is to use passive deception, or omission: the intentional withholding or omitting of information, whereby participants are not told some information about the study. Another possibility is to use active deception, or commission: the intentional presentation of misinformation about a study to its participants, most commonly misleading participants about the specific purpose of the study. In simple terms, passive deception is keeping secrets and active deception is telling lies. |
|
|
Term
Major Ethical Issues
The Institutional Review Board |
|
Definition
Institutional Review Board (IRB)
A committee that examines all proposed research with respect to its treatment of human participants. IRB approval must be obtained prior to the start of any research with human participants. |
|
|
Term
Major Ethical Issues
Institutional Animal Care and Use Committee (IACUC) |
|
Definition
A committee that examines all proposed research with respect to its treatment of nonhuman subjects. IACUC approval must be obtained prior to conducting any research with nonhuman subjects. |
|
|
Term
Fraud in Science
Fraud versus Error |
|
Definition
An error is an honest mistake that occurs in the research process. Researchers are human and make mistakes; it is the investigator's responsibility to double-check the data to minimize the risk of errors.
Fraud, on the other hand, is an explicit effort of a researcher to falsify or misrepresent data. Fraud is unethical. |
|
|
Term
Safeguards against Fraud
replication
|
|
Definition
Repetition of a research study with the same basic procedures used in the original study. The intent of replication is to test the validity of the original study. Either the replication will support the original study by duplicating the original results, or it will cast doubt on the original study by demonstrating that the original result is not easily repeated.
|
|
|
Term
Safeguards against Fraud
plagiarism |
|
Definition
Plagiarism is presenting someone else's ideas or words as one's own. Like fraud, it is a serious breach of ethics. Reference citations (giving others credit when credit is due) must be included in your paper whenever someone else's ideas or work has influenced your thinking and writing.
|
|
|
Term
Paragraph: Science vs. Pseudoscience |
|
Definition
A primary feature that differentiates science from pseudoscience is that science is intended to provide a carefully developed system for answering questions so that the answers obtained are as accurate and complete as possible. Pseudosciences are sets of ideas based on nonscientific theory, faith, or belief that are often presented as science but lack the components essential to scientific research. These elements include:
a testable and refutable hypothesis,
an objective and unbiased evaluation of all evidence,
theories that constantly evolve and adapt with new evidence, and
theories grounded in past science with solid empirical support. |
|
|
Term
Distinguish between induction and deduction and describe how each is used in the scientific method. |
|
Definition
Inductive reasoning involves reaching a general conclusion based on a few specific examples, and that conclusion reaches far beyond the actual observations.
Inductive reasoning is used in Step 1 (Observe Behavior or Other Phenomena), where a few examples are used to come up with a generalization.
Deductive reasoning is used in Step 3 (Use Your Hypothesis to Generate a Testable Prediction): we begin with a general statement, using our hypothesis as the premise, and then determine the conclusions/predictions that logically follow if the hypothesis is true.
Deduction moves from the general statement to a specific conclusion about specific examples.
|
|
|
Term
Describe one of the modalities of measurement and discuss both the advantages and disadvantages and the limitations of this modality |
|
Definition
The main advantage of a self-report measure is that it is a direct way to assess a construct. We presume that each individual is most in tune with his or her own self-knowledge and awareness, so the answer to a direct question should have more validity than other response measures. On the negative side, it is easy for participants to distort their reports, whether to create a better self-image or in response to other aspects of the research situation. The validity of the measurement is undermined when self-report responses are distorted by the participant.
A limitation is that it is always possible participants will distort or incorrectly answer questions; therefore, the results show only what the participants report, and they are not definitive. |
|
|
Term
What is meant by the reliability of measurement and describe three methods for measuring reliability.
|
|
Definition
A measurement procedure is said to have reliability if it produces identical (or nearly identical) results when it is used repeatedly to measure the same individual under the same conditions.
The concept of reliability is based on the assumption that the variable being measured is stable or constant.
Successive measurements: test-retest reliability compares the scores obtained from two successive measurements of the same individuals and calculates a correlation between the two sets of scores.
Simultaneous measurements: two observers each record measurements of the same behavior, and the degree of agreement between the two observers is called inter-rater reliability.
Internal consistency: no single item or question is sufficient to provide a complete measure of the construct. With split-half reliability, the items on a questionnaire or test are split in half, a separate score is computed for each half, and the degree of consistency between the two scores is measured for the participant group. |
|
|
Term
Briefly explain why a researcher might find it necessary to use deception in a research study.
|
|
Definition
Deception is used in situations in which completely informing the participants would undermine the goals of the research. Though some information may be disguised, concealed, or simply unknown, the participants MUST be informed of any known potential risks. |
|
|
Term
Explain the role of the IRB.
|
|
Definition
Most human-participant research must be reviewed and approved by a group of individuals not directly affiliated with the specific research study. The U.S. Department of Health and Human Services (HHS) requires review of all human-participant research conducted by government agencies and institutions receiving government funds with respect to seven basic criteria.
- Minimization of Risk to Participants.
- Reasonable Risk in Relation to Benefits.
- Equitable Selection.
- Informed Consent.
- Documentation of Informed Consent.
- Data Monitoring.
- Privacy and Confidentiality.
The IRB requires a written research proposal that addresses each of the seven criteria. Research proposals are classified into three categories: 1. Category I (Exempt Review) if the research presents no possible risk to adult participants. 2. Category II (Expedited Review) if the research presents no more than minimal risk to participants. 3. Category III (Full Review) for research proposals that include any questionable elements; a meeting of all of the IRB members is required, and the researcher must appear in person to discuss, explain, and answer questions about the research.
Throughout the process, the primary concern of the IRB is to ensure the protection of human participants. |
|
|