Term
|
Definition
Once we give a construct a definition, it becomes a concept. |
|
|
Term
|
Definition
An abstract idea; no real agreed-upon definition. |
|
|
Term
|
Definition
Starts with background experience.
Starts blindly and then comes up with a theory to explain what happens.
Theory comes last.
Uses data to build theories.
Moves from specific data to the more general.
Favors objectivity because it starts blind and tries to make sense of the data. |
|
|
Term
|
Definition
Coming up with theories from history.
Theory driven.
Theory comes first.
Data is used to determine whether the theory is correct.
Research stems from a known position as to how the data will develop. |
|
|
Term
|
Definition
Broad group of people.
All units possessing the attributes or characteristics in which the research is interested.
Determined by the researcher based on where the primary interest lies.
Goal is to understand this population by viewing a subset of it. |
|
|
Term
|
Definition
Those members of the population that we are capable of studying.
The set of units that have a chance to be sampled.
Rarely exactly the same as the population; some people are unavailable.
|
|
|
Term
|
Definition
People that we care about, that we can reach, and that are part of the study.
Data collected from the sample is used to make inferences about the population.
Statistics allow us to do this. |
|
|
Term
|
Definition
Deals with the connection between two variables, e.g., is there a correlation between being a morning person and 9 a.m. attendance? |
|
|
Term
|
Definition
Makes a prediction in a certain direction, e.g., "I think morning people will have better attendance." Talks about the way things are going to work; makes an inference in the hypothesis. |
|
|
Term
Non-directional Hypothesis: |
|
Definition
Safer than a directional hypothesis. Says that there will be a difference but doesn't specify what this difference will be. |
|
|
Term
|
Definition
Affects what we care about. Stands in the middle of our two variables and bridges them together.
- E.g., between job performance and job satisfaction there is another variable, rewards, that actually connects the two. |
|
|
Term
|
Definition
A variable that confuses or obscures the results; stands in the way instead of bridging. |
|
|
Term
|
Definition
The idea of being certain that nothing within the experiment is going to distort the results, e.g., taking the exam in a lab instead of the classroom, so everything is exactly equal for everyone. |
|
|
Term
|
Definition
How the experiment will stand up in real life. How the lab settings compare to the real world. |
|
|
Term
Relationship between internal and external validity: |
|
Definition
Inversely related: as internal validity goes up, external validity goes down. |
|
|
Term
|
Definition
part of the process.
Not really an independent variable.
Not something we manipulated in the study
i.e- age |
|
|
Term
|
Definition
result of a predictor variable. |
|
|
Term
|
Definition
Studying aggression by looking at stress levels, anger levels, etc.
How directly things should correlate with each other. |
|
|
Term
What does communication Research do? |
|
Definition
Describes: outcomes, processes, or relationships between variables.
Determines causes: of communication behavior; intervention programs; real-world applications of research.
Predicts: improves decision-making and precautionary measures.
Explains: understanding why a behavior occurs can help us adjust or modify behaviors.
Controls outcomes. |
|
|
Term
|
Definition
An educated guess or proposition about the relationship between two or more variables. |
|
|
Term
|
Definition
ask what tentative relationship might exist between variables. |
|
|
Term
Scientific vs Everyday ways of knowing |
|
Definition
1. Personal experience
2. Intuition: "it makes sense"
3. Authority: "someone told me so"
4. Democratic dialogue: the best ideas emerge when you assume all voices have equal weight.
5. Appeals to faith: lack logical proof or material evidence. |
|
|
Term
Qualities of scientific research: |
|
Definition
1. based on evidence
2. propositions are testable
3. explores all possible explanations
4. results are replicable
5. results are made public
6. scientific research has a self correcting nature
7. relies on measurement and observation
8. recognizes the possibility of error- tries to control it
9. objectivity requires minimization of bias and distortion
10. scientists are skeptical
11. interested in generalizability
12. heuristic in nature (the ability to suggest new questions or new methods of conducting research) |
|
|
Term
Heisenberg's Uncertainty Principle: |
|
Definition
Error in instruments
anything that can happen can happen by chance
All measurement contains error
you will never perfectly capture something through an experiment
Monkeys typing Shakespeare. |
|
|
Term
|
Definition
Some researchers use inappropriate statistical tests to evaluate their research. |
|
|
Term
How to evaluate your sources on the topics: |
|
Definition
- Look out for authors of multiple papers; publication date.
- Read the abstracts
- Read the literature reviews; identify theoretical foundations (an assembly of different articles on the topic with synthesis)
- Read discussion Sections- what was found? what still needs to be found? is the topic worth further exploration? where can you fit in? |
|
|
Term
6 Steps of Theory Building: |
|
Definition
1. describe an event or observation that needs understanding
2. create logical explanation
3. move from specific explanation to more general application
4. derive predictions from the theory built
5. select a focus and test theory
6. use results to confirm, revise, expand, generalize, or abandon developed theory- theory flawed? or method flawed? |
|
|
Term
Breaking Down The Lit Review:
|
|
Definition
- Frames the research investigation; puts study in perspective
- Breakdown and analysis of variables studied (including history)
- Not just summary: Analysis, Synthesis, critique (acknowledge drawbacks) |
|
|
Term
|
Definition
Justifies study
Near beginning of lit review
Includes empirical report findings, theoretical articles, previous similar studies, and a broad examination of an entire problem area. |
|
|
Term
|
Definition
Beware of stylistic flaws
Beware of unclear careless writing; unspecified assumptions
Make sure report fits within context of body of research
Make sure all hypotheses are clear and specific
Make sure all key terms are defined theoretically and operationally |
|
|
Term
|
Definition
States that a functional relationship exists between two variables. |
|
|
Term
|
Definition
States there is the expectation of a difference between 2 or more groups. |
|
|
Term
|
Definition
States that no relationship (other than by random chance) exists between variables. |
|
|
Term
|
Definition
used when we cannot make a prediction. |
|
|
Term
|
Definition
The variable that is manipulated or varied to see the effect on other variables.
- Used when the researcher manipulates the variable; when the researcher does not manipulate the variable, we call it a predictor variable. |
|
|
Term
|
Definition
The variable influenced or changed by the independent variable; this is the variable of prime interest.
- When the researcher does not manipulate the first variable, this is called the outcome variable. |
|
|
Term
|
Definition
process of determining how we'll measure a variable- requires interpretation. |
|
|
Term
An operational definition must... |
|
Definition
- describe a unit of measurement
- specify a level of measurement
- provide a logical/mathematical statement about how measurement is supposed to be made and combined. |
|
|
Term
Covariance/correlation relationship: |
|
Definition
Variables change simultaneously. Allows for prediction, but not explanation. Relationships may be positive, negative, or curvilinear. |
|
|
Term
|
Definition
change in one variable (independent/antecedent) results in change in other variables (dependent/consequent) |
|
|
Term
|
Definition
- Space and time contiguity
- Covariance
- Temporal Ordering
- Necessary connection |
|
|
Term
|
Definition
When the presence of a 3rd variable obscures the nature of the relationship between two variables.
It can...
create a relationship that doesn't exist
inflate a relationship's strength
deflate a relationship's strength |
|
|
Term
Controlling Manipulated Control: |
|
Definition
1. identify the potential confound variable
2. hold it constant
- equalize assignment of cases based on the value of the confound variable across groups (experimental/control groups have equal proportions of the confound variable) |
|
|
Term
Controlling Statistical control: |
|
Definition
- identify potential confounding variable
- measure the variable and include it as a control variable
- mathematically remove the effect of the variable from the variation of the dependent variable.
- assess the unique effect of X on Y with the effect of the confounding variable on the dependent variable removed. |
|
|
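The residualization logic above can be sketched in a few lines of Python; the data and variable names (x, y, and confound z) are purely illustrative.

```python
# Sketch of statistical control via residualization (a simple form of
# partial correlation). All data here is made up for illustration.

def mean(v):
    return sum(v) / len(v)

def residuals(y, z):
    """Residuals of y after removing the linear effect of control variable z."""
    my, mz = mean(y), mean(z)
    slope = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y)) / \
            sum((zi - mz) ** 2 for zi in z)
    intercept = my - slope * mz
    return [yi - (intercept + slope * zi) for zi, yi in zip(z, y)]

def correlation(a, b):
    ma, mb = mean(a), mean(b)
    num = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    den = (sum((ai - ma) ** 2 for ai in a) *
           sum((bi - mb) ** 2 for bi in b)) ** 0.5
    return num / den

# Toy data: x and y are both driven largely by the confound z.
z = [1, 2, 3, 4, 5, 6, 7, 8]
x = [zi * 2 + e for zi, e in zip(z, [0.1, -0.2, 0.0, 0.3, -0.1, 0.2, -0.3, 0.1])]
y = [zi * 3 + e for zi, e in zip(z, [-0.2, 0.1, 0.3, -0.1, 0.2, -0.3, 0.0, 0.1])]

raw_r = correlation(x, y)
partial_r = correlation(residuals(x, z), residuals(y, z))
print(round(raw_r, 2), round(partial_r, 2))  # raw r is inflated by z; partial r is much weaker
```

With the confound's variation removed from both variables, the remaining correlation reflects only the unique x-y link.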
Term
Controlling Randomization: |
|
Definition
no need to identify the potential confounding variables
- random assignment to groups representing levels of independent variable (ex. experimental/control groups)
- ensures that the effects of other variables are equalized across groups, or that any differences in assignment are random. |
|
|
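A minimal sketch of random assignment in Python; the participant IDs and group labels are hypothetical.

```python
import random

def random_assignment(participants, groups=("experimental", "control"), seed=None):
    """Randomly assign participants to groups of (near-)equal size."""
    rng = random.Random(seed)
    shuffled = participants[:]           # copy so the original list is untouched
    rng.shuffle(shuffled)
    assignment = {g: [] for g in groups}
    for i, p in enumerate(shuffled):     # deal participants out round-robin
        assignment[groups[i % len(groups)]].append(p)
    return assignment

assigned = random_assignment(list(range(1, 21)), seed=42)
print({g: len(m) for g, m in assigned.items()})  # {'experimental': 10, 'control': 10}
```

Because the researcher never chooses who lands in which group, confounds are expected to balance out across groups.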
Term
|
Definition
Quantitative research involves using measurement to gather data to help answer our questions. |
|
|
Term
|
Definition
device/instrument used
How is the device used
Skill of person using device
Attributes or characteristics being measured
These dimensions of measurement can/will impact data
The proper use of measurement influences the accuracy. |
|
|
Term
|
Definition
- level used to measure is based on your operationalization.
- determines how you will measure a concept
-must specify a level & unit of measure. |
|
|
Term
|
Definition
Nominal Data (lowest)
Ordinal Data
Interval Data
Ratio Data (highest)
- each level has all the characteristics of the preceding level
- higher levels offer greater precision. |
|
|
Term
|
Definition
- describes presence or absence of some characteristic or attribute.
- no way to express partial presence.
- any value imposed on the categories is arbitrary
- minimum of two categories; must be mutually exclusive and exhaustive
- type or category based; categories have no logical rank or level (also known as categorical data EX. M&F) |
|
|
Term
|
Definition
- Measured by ranking elements in logical order from lowest to highest
- sequencing of data without precise measurement.
- rankings are relative (there can be no zero)
- logical rank order, but no logical distance between elements
- determined by "more" or "less", but we don't know how much "more" or "less"
EX- class standing |
|
|
Term
|
Definition
- measured based on specific numeric scores or values
- distance between any two adjacent points assumed to be equal
- zero is arbitrary, there is no absolute zero
- rank order, values equally spaced apart, but no true zero point. (EX. using a 1-5 scale) |
|
|
Term
|
Definition
- interval level data qualities, but also have an absolute zero
- most variables that meet interval-level requirements also meet requirements for ratio level (EX.- Number of children in a household 0-5)
- allows us to compare two people on variable and determine the ratio of one person's value to another person's
- higher scale= raw numbers, lower scale= arbitrary continuum |
|
|
Term
Issues in levels of measurement: |
|
Definition
-Level you decide to measure your variables at should be guided by analysis you will later do
- Certain stats require certain types (levels) of data
- normally best to measure at highest possible level
- can always modify to a lower level of data, but can never modify to a higher level of data. |
|
|
Term
|
Definition
truthfulness/accuracy of measurement. |
|
|
Term
|
Definition
consistency/stability of measurement
-reliability is necessary, but not sufficient for validity.
- reliability refers to how consistent your measure is at measuring what is supposed to be measured. |
|
|
Term
|
Definition
Administering the same measure multiple times and assessing how consistent the responses are
*the more time between responses the lower the reliability |
|
|
Term
Alternate form (parallel form) reliability: |
|
Definition
Generate two versions of the same measurement using items from same pool. Administer both versions to the same group.
*if results from both versions are similar, strong alternate form reliability. |
|
|
Term
|
Definition
Randomly split the questionnaire's items in half; consistent results across the two halves indicate a reliable measure
(EX. even- vs. odd-numbered items) |
|
|
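A toy illustration of split-half reliability in Python, correlating even- and odd-numbered item halves; the response data is made up.

```python
def mean(v):
    return sum(v) / len(v)

def correlation(a, b):
    ma, mb = mean(a), mean(b)
    num = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    den = (sum((ai - ma) ** 2 for ai in a) *
           sum((bi - mb) ** 2 for bi in b)) ** 0.5
    return num / den

# Each row: one respondent's scores on a 6-item questionnaire (illustrative).
responses = [
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 4],
    [5, 4, 5, 5, 5, 4],
    [1, 2, 1, 2, 1, 1],
]

odd_half  = [sum(row[0::2]) for row in responses]  # items 1, 3, 5
even_half = [sum(row[1::2]) for row in responses]  # items 2, 4, 6
r = correlation(odd_half, even_half)
print(round(r, 2))  # a high r suggests strong split-half reliability
```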
Term
|
Definition
Compares responses to individual items to the responses of all items. |
|
|
Term
|
Definition
the extent to which conclusions developed from data collected from a sample can be extended to the population
- the goal of sampling is to have a representative sample that is highly generalizable |
|
|
Term
Equal Likelihood Principle: |
|
Definition
Every element in the population has an equal chance of being selected for the sample; ensures representativeness. |
|
|
Term
|
Definition
the degree to which a sample is NOT representative. |
|
|
Term
|
Definition
The degree to which a sample favors one attribute or characteristic of the population more than others. |
|
|
Term
|
Definition
when potential observations from population are excluded from the sample by systematic means. |
|
|
Term
|
Definition
when cases systematically exclude themselves from the sample (by "selecting out") |
|
|
Term
|
Definition
Can be addressed by using random sampling: since scientists cannot predict who will be in the sample, no bias is introduced. |
|
|
Term
|
Definition
Can be combated by oversampling, deliberately selecting more of the types of cases that are likely to select out of the research (weighting cases can also reduce bias) |
|
|
Term
|
Definition
Degree to which the sample differs from the population's characteristics on some measurement.
Degree to which our sample is not representative.
The random error that occurs in any sample; unpredictable things happen.
Sampling error is reduced by increasing sample size. |
|
|
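The last point, that sampling error shrinks as sample size grows, can be demonstrated with a small simulation; the population and sample sizes here are arbitrary.

```python
import random

def sample_mean_error(population, n, trials=2000, seed=1):
    """Average absolute gap between sample means of size n and the population mean."""
    rng = random.Random(seed)
    pop_mean = sum(population) / len(population)
    total = 0.0
    for _ in range(trials):
        s = rng.sample(population, n)
        total += abs(sum(s) / n - pop_mean)
    return total / trials

population = list(range(1000))   # toy population: values 0..999
small = sample_mean_error(population, 10)
large = sample_mean_error(population, 200)
print(round(small, 1), round(large, 1))  # error for n=200 is noticeably smaller than for n=10
```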
Term
|
Definition
Some form of random selection ensures any element can be sampled.
The probability of any element being selected is equal and known for every element in the population.
Randomization eliminates researcher bias in sampling. |
|
|
Term
Non- probability sampling: |
|
Definition
techniques do not use any form of random selection
Used when randomizing isn't feasible or when the phenomenon is believed not to exist throughout the population (IE- diseases). |
|
|
Term
|
Definition
randomly choose elements from sampling frame. requires a full sample frame (knowing all available elements) |
|
|
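A minimal Python sketch, assuming a fully known sampling frame (unit names are hypothetical):

```python
import random

# Full sampling frame: every one of the 100 units is known and listed.
sampling_frame = [f"unit_{i}" for i in range(1, 101)]
rng = random.Random(7)
sample = rng.sample(sampling_frame, 10)  # each unit has an equal chance of selection
print(len(sample))  # 10
```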
Term
Systematic Random Sampling: |
|
Definition
a systematic choice procedure beginning with a random choice, then every Nth unit. |
|
|
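The random-start, every-Nth-unit procedure can be sketched as follows (frame and sample sizes are illustrative):

```python
import random

def systematic_sample(frame, n, seed=None):
    """Random starting point, then every k-th unit, where k = len(frame) // n."""
    k = len(frame) // n                 # sampling interval
    rng = random.Random(seed)
    start = rng.randrange(k)            # random start within the first interval
    return [frame[start + i * k] for i in range(n)]

frame = list(range(100))
s = systematic_sample(frame, 10, seed=3)
print(s)  # 10 evenly spaced units beginning at a random offset
```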
Term
Stratified (known quota) sampling: |
|
Definition
Population divided based on subgroups of interest; elements from each group are randomly selected proportional to the whole. |
|
|
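A sketch of proportional stratified sampling in Python; the strata and sizes are hypothetical (real code would also handle rounding of quotas more carefully).

```python
import random

def stratified_sample(strata, total_n, seed=None):
    """Randomly sample from each stratum in proportion to its share of the population."""
    rng = random.Random(seed)
    population_size = sum(len(units) for units in strata.values())
    sample = []
    for name, units in strata.items():
        quota = round(total_n * len(units) / population_size)  # proportional quota
        sample.extend(rng.sample(units, quota))
    return sample

strata = {
    "freshmen": [f"f{i}" for i in range(60)],  # 60% of the population
    "seniors":  [f"s{i}" for i in range(40)],  # 40% of the population
}
s = stratified_sample(strata, 10, seed=5)
print(len(s))  # 6 freshmen + 4 seniors = 10
```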
Term
|
Definition
Identify groups/clusters within the population of interest, then randomly select clusters. |
|
|
Term
Why use non-probability sampling? |
|
Definition
- no other technique can reasonably be used (ie- time or money)
- when we believe that the variables of interest are evenly distributed across the population of interest
- when looking at special cases. |
|
|
Term
|
Definition
Selecting people who are convenient to the researcher (ie- college students). This is a frequently used technique in social science research. |
|
|
Term
Inclusion/exclusion criteria: |
|
Definition
selecting/not selecting people based on the fact that they meet/do not meet some specific characteristics. |
|
|
Term
|
Definition
when participants help by identifying other similar participants. Relies on the help of the research participants to get full sample. |
|
|
Term
|
Definition
actively seeking individuals who fit a specific profile. Researcher does the recruitment- not the participants. |
|
|
Term
Post-test only experimental design: |
|
Definition
Do 2 groups differ after treatment is presented to only one?
-if yes, we attribute an effect to the independent variable
Remember, multiple groups can receive some form of the manipulation. |
|
|
Term
Pretest-Posttest Experimental Design: |
|
Definition
-random assignment should provide equivalent groups.
- but w/ only a post-test we have no way of being sure.
- allows us to test the equality of groups at time 1.
- introduces some chance of a testing effect.
- we can compare groups on pre/post questions.
Comparing posttest to pretest helps control for the testing effect. |
|
|
Term
Factorial Experiment Design: |
|
Definition
Researcher manipulates 2 or more independent variables.
- used to explain complex cause-effect relationships that cannot be explained by one independent variable. |
|
|
Term
Main Effect of Factorial Experimental Design: |
|
Definition
influence of each individual independent variable on the dependent variable. |
|
|
Term
Interaction Effect of the Factorial Experimental Design: |
|
Definition
combined influence of the independent variables on the dependent variable |
|
|
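How main and interaction effects fall out of a 2x2 design can be shown with hypothetical cell means:

```python
# Cell means from a hypothetical 2x2 factorial experiment:
# first key element = level of factor A, second = level of factor B.
cell_means = {
    ("a1", "b1"): 10, ("a1", "b2"): 14,
    ("a2", "b1"): 20, ("a2", "b2"): 32,
}

def marginal(level, position):
    """Average of the cell means at one level of one factor."""
    vals = [m for key, m in cell_means.items() if key[position] == level]
    return sum(vals) / len(vals)

main_a = marginal("a2", 0) - marginal("a1", 0)   # main effect of factor A
main_b = marginal("b2", 1) - marginal("b1", 1)   # main effect of factor B

# Interaction: does the effect of B change across levels of A?
effect_b_at_a1 = cell_means[("a1", "b2")] - cell_means[("a1", "b1")]
effect_b_at_a2 = cell_means[("a2", "b2")] - cell_means[("a2", "b1")]
interaction = effect_b_at_a2 - effect_b_at_a1

print(main_a, main_b, interaction)  # 14.0 8.0 8
```

Here B raises the outcome by 4 at a1 but by 12 at a2, so the two independent variables interact rather than acting purely additively.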
Term
|
Definition
multiple measurements of the dependent variable over time
- can be applied to post-test only or pre-post design. |
|
|
Term
Experimental Design Conclusions: |
|
Definition
-easy way to establish causality
- cannot be applied to all communication phenomena
- strong in terms of internal validity
- weak in terms of external validity |
|
|
Term
Quasi Experimental Designs: |
|
Definition
no control over the independent variable because it varies naturally. |
|
|
Term
Descriptive Design (cross-sectional or non experimental studies): |
|
Definition
-researcher does not control the manipulation of the independent variables
- participants are not randomly assigned to the conditions.
-causation cannot be determined.
- Predictor and criterion are better labels for the variables, as they do not imply causality. |
|
|
Term
|
Definition
how well a measure covers range/dimensions of meaning on a subject. |
|
|
Term
|
Definition
same as face validity, but using experts as judges. |
|
|
Term
|
Definition
Validity in relation to some external criterion. |
|
|
Term
|
Definition
does the measure accurately predict future behaviors?
do those scoring high on apprehension scale behave apprehensively? |
|
|
Term
|
Definition
does the measure concur with other measures? |
|
|
Term
|
Definition
how well scale compares to other scales which measure related and unrelated theoretical concepts. |
|
|
Term
Convergent construct validity: |
|
Definition
Does the measure correlate highly with theoretically similar measures?
ie- shyness scale with communication apprehension scale
- depression scale with low self-esteem scale. |
|
|
Term
Discriminant Construct Validity: |
|
Definition
does the measure correlate negatively with measures that are theoretically different? |
|
|
Term
Threats to Validity and Reliability: |
|
Definition
A threat is any data-related problem that can lead to false conclusions from the data. |
|
|
Term
Threats due to data collection problems: |
|
Definition
-Instrument itself: is it outdated? biased?
-Pygmalion Effect: does the instrument inadvertently suggest the desired response?
-Maturation: people change or become familiarized with the questionnaire.
-Mortality/Attrition: people leave the study |
|
|
Term
Threats due to Sampling Issues: |
|
Definition
Selection: sampling techniques favor certain people, or certain people self-select out of the study. |
|
|