Term
Descriptive statistics |
|
Definition
- The science of summarizing the characteristics of sample data
- Provide a summary of characteristics that have been observed in a sample
- NOT used to infer about people or objects unobserved
- often helpful to compile and present the data in pictorial form
- allows us to visualize patterns exhibited by the variable being studied
- especially helpful when sample is large
|
|
|
Term
Inferential statistics |
|
Definition
- A logical system for estimating population characteristics based on sample descriptions
- provide a basis for estimating population characteristics from knowledge of sample
- ARE used to infer about people or objects unobserved
|
|
|
Term
graphic descriptions of data |
|
Definition
- nominal data: assignment to categories
- graphs showing nominal data (e.g., bar graph, frequency polygon, Likert scale)
- ordinal data: assignment to points on a scale
- graphs showing ordinal data (e.g., histogram)
|
|
|
Term
Interpreting graphic descriptions |
|
Definition
- Frequency of assignment
- Different shapes of distributions
|
|
|
Term
shapes of distributions |
|
Definition
- indicate general characteristics of distribution
- bell-shaped
- peaked
- flat
- skewed
|
|
|
Term
bell-shaped distribution |
|
Definition
- normal curve
- normal distribution
|
|
|
Term
peaked (homogeneous) distribution |
|
Definition
- measurement shows little discrimination of characteristics
- little variance
|
|
|
Term
Flat/uniform (heterogeneous) distribution |
|
Definition
- measurement shows greater discrimination of characteristics
- data are scattered quite evenly along a measurement scale
|
|
|
Term
skewed (positive or negative) distribution |
|
Definition
- shows that characteristics tend to cluster away from one end of the scale
- negative (left) looks like positive correlation
- positive (right) looks like negative correlation
|
|
|
Term
central tendency |
|
Definition
- how scores tend to cluster
- refers to statistical measures of the clustering features of data
|
|
|
Term
indices of central tendency |
|
Definition
- mode
- median
- mean
- refer to statistical measures of the clustering features of data
|
|
|
Term
mode |
|
Definition
- the most frequent score/observation
- indicator of central tendency
- best for nominal data
- preferred index for representing the central tendency of nominal data
- little value as a measure of central tendency for non-categorical data
- can be bimodal if there is more than one most frequent score
|
|
|
Term
median |
|
Definition
- the midpoint of scores
- a computed value or point rather than an actual sample score: the point may coincide with an actual score or it may be some value falling between two actual scores
- can be computed for ordinal, interval, and ratio data, but not for nominal measurements
- best for ordinal data
|
|
|
Term
mean |
|
Definition
- point at which scores above and below average out to zero
- the arithmetic average
- the most sensitive measure of the data
- sensitive to all the scores in a sample, representing what we might call a distribution’s center of gravity
- because the mean is sensitive to all scores in a sample, extreme or outlying data in the sample can greatly affect this index of central tendency
- best for interval and ratio
|
|
|
Term
indices of dispersion |
|
Definition
- range
- variance
- standard deviation
- tell us about the variations among distribution scores - specifically, how a distribution's scores scatter about the "average" score
|
|
|
Term
range |
|
Definition
- the distance from the highest to lowest score in a distribution
- simplest of all dispersion indices to calculate
- weak as a dispersion index
- seldom meaningful as a sole dispersion index for ordinal, interval, and ratio data
- frequently useful in combination with other measures of dispersion
|
|
|
Term
variance |
|
Definition
- sensitive to all the scores in a distribution
- derived from the sum of the squared deviations of all scores in a distribution about the mean
- the mean of the squared deviations from the mean of a distribution
|
|
|
Term
standard deviation |
|
Definition
- the square root of the variance
- most commonly used index of dispersion
- sensitive to all the scores in a distribution and therefore represents an exceedingly good index of dispersion
|
|
|
Term
histogram |
|
Definition
- A graphic presentation used to summarize data
- Similar to a bar graph except the horizontal scale is continuous |
|
|
Term
bar graph |
|
Definition
Presentation used to summarize data, especially nominal data, in which the display consists of rectangles or bars, each having a height equal to the frequency of the score or group it represents |
|
|
Term
Inferential statistics |
|
Definition
- the science of drawing conclusions about population characteristics based on sample descriptions
- allows a researcher to use the statistics derived from a randomly selected sample to estimate population parameters
|
|
|
Term
4 Assumptions of Inferential Statistics |
|
Definition
- 1. All sample data are to be selected randomly, insofar as possible, from some well-defined population
- 2. The characteristics of each random sample drawn from a population are related to the true population parameters
- 3. Multiple random samples drawn from the same population yield statistics that cluster around true population parameters in predictable ways
- 4. We can calculate the sampling error associated with a sample statistic, estimating how far a population parameter is likely to deviate from a given sample statistic
|
|
|
Term
|
Definition
- do not measure entire population
- instead we sample to represent the population
- descriptive statistics give us indexes of the sample
- sampling statistics give us estimates of population indexes
|
|
|
Term
terms used to discuss indexes of populations |
|
Definition
- statistics
- parameter
- statistical inference
|
|
|
Term
statistics |
|
Definition
- the science of describing and reasoning from numerical data
- sample characteristics such as means and standard deviations
|
|
|
Term
parameter |
|
Definition
numerical characteristic of a population |
|
|
Term
statistical inference |
|
Definition
the process of estimating parameters from statistics |
|
|
Term
different kinds of distributions |
|
Definition
|
|
Term
sampling distribution |
|
Definition
- distribution of characteristics observed in samples from a population
- Although ____ can be derived theoretically, their nature is easily understood by imagining how we might empirically construct one
- Not limited to sample means but can represent the distributed values of other sample statistics: median scores, percentages, etc.
- If all possible random samples of size n are selected from a given population, we have the basis for a theoretical ________
|
|
|
Term
sampling distribution of means |
|
Definition
- 1. Has mean equal to the mean of the parent population
- 2. Has standard deviation that is related to that of the parent population and is calculated by (see pg. 106)
- 3. Is normally distributed when the parent population is normally distributed and is approximately normal even when the parent population is not normally distributed if the samples are larger (size n = 30 or more)
|
|
|
Term
population distribution |
|
Definition
characteristics we expect (or assume) to exist in the population |
|
|
Term
Distribution in terms of probability: The Normal Distribution Curve |
|
Definition
- bell distributions viewed as the percent of time something occurs (probability) instead of the frequency of occurrence
- this allows generalizing to samples of different sizes
- there is a fixed relation between the standard deviation (s) and the probability (%) of scores falling within a given number of standard deviations
- + or – 1 standard deviation = about 68% of the time
- + or – 2 standard deviations = about 95% of the time; if you know the population parameters, you can state confidence at that level
- you can start to make estimates of what someone’s score will be
|
|
|
Term
Normal Curve as Population Distribution |
|
Definition
if we knew the parameters of a population, we could estimate the chance of scores (in a sample) being within a certain range |
|
|
Term
Normal Curve as Sampling Distribution |
|
Definition
- if the sampling distribution fits the normal curve, we can estimate the chance that one sample mean would be in a certain range
- this range is viewed as standard error of the mean
- we can understand this as sampling error
- When the standard deviation of the parent population is not known, we use the sample standard deviation s to estimate it
|
|
|
Term
standard error |
|
Definition
the standard deviation of a sampling distribution; a description of the probable deviations of a population parameter from a given sample statistic at some level of confidence. |
|
|
Term
aspects of sampling error |
|
Definition
- Can be used to specify confidence intervals and to estimate the value of a population parameter
- if we have a random sample from a normal distribution, we can assume a particular relationship between the sample characteristic (e.g., the sample mean) and the population characteristic (e.g., the population mean); the laws of chance say our sample mean will not exactly equal the population mean
- if we can estimate the s (standard deviation) of a statistic (e.g., mean) then we can estimate the range in which we expect the population characteristic to fall
|
|
|
Term
research hypothesis |
|
Definition
- “predict” or conjecture that important relationships exist between populations of communication phenomena.
- Although the projected relationships can take many forms, research hypotheses often predict that two or more populations are different in one or more respects.
- two population means will differ
- the two groups will differ on a certain characteristic
- (M1 – M2 does not equal 0)
- directional or non-directional
|
|
|
Term
null hypothesis |
|
Definition
- The antithesis of a research hypothesis
- Researchers confirm or disconfirm their research hypotheses by assessing the “truth” of the null hypothesis: one may reject the null hypothesis or fail to reject it. By rejecting the null hypothesis, one accepts the research hypothesis as a default option.
- population means will not differ
- (M1 – M2 = 0)
- assumes there is only one population underlying the groups being compared
- and that the variables are not actually related to each other
- thus, any differences observed are due to sampling error
- we would expect some error by chance
|
|
|
Term
logic of testing: Statistical reasoning |
|
Definition
- allows us to calculate the probability that observed differences (e.g., between means) are due to error
- If this probability is low, we can assume this is not due to error
- Thus, we reject the null hypothesis, and accept the research hypothesis
|
|
|
Term
SAMPLING DISTRIBUTION OF THE NULL HYPOTHESIS |
|
Definition
- If ________ were true, the average difference of all comparisons (e.g. M1 – M2) should equal zero
- One-tailed test
- Directional hypotheses use a one-tailed test
- Two-tailed test
- Non-directional hypotheses use a two-tailed test
|
|
|
Term
directional vs. non-directional hypotheses |
|
Definition
- directional hypothesis
- comparing 2 means with a t test or a z test
- predicts that one group is going to be higher than the other
vs.
- non-directional hypothesis: states only that there will be a difference
|
|
|
Term
Type I error |
|
Definition
Rejecting the null hypothesis when it should be accepted (in reality, the null was true) |
|
|
Term
Type II error |
|
Definition
Failing to reject the null hypothesis when it should be rejected (in reality, the null was NOT true) |
|
|
Term
5 Steps in Hypothesis Testing |
|
Definition
- 1. state research & null hypotheses
- 2. State the probability of error
- choose the significance level (e.g., p < .05)
- 3. determine the criterion or decision rule
- choose the appropriate statistic
- define the critical region
- 4. Perform calculations on the test statistic
- 5. make decision on null hypothesis
- choose to reject or fail to reject the null hypothesis
|
|
|
Term
2 types of variance |
|
Definition
1. Systematic or between-groups variance
&
2. error or within-groups variance
- Allow us to test hypotheses that significant differences exist between the populations yielding sample groups
- The greater the magnitude of systematic variance relative to error variance, the more confident we can be that real population differences exist
|
|
|
Term
variance (mean square) |
|
Definition
- Also called mean square
- Measures the dispersion of scores about the sample mean
|
|
|
Term
systematic variance |
|
Definition
- The variation among scores that is due to some influence that “pushes” scores in one direction or another
- Often called between group variance
|
|
|
Term
error variance |
|
Definition
- Fluctuations in group scores that are due to random or chance factors
- Scores fluctuate up and down in a random-like fashion
- Variables such as carelessness, fatigue, situational distractions
|
|
|
Term
analysis of variance (ANOVA) |
|
Definition
- A set of methods for assessing the significance of differences among two or more population means (independent as well as related) based on data derived from independent or related samples
- Resembles t and z procedures in that observed differences among groups are compared to error differences or fluctuations attributable to chance
- Takes the form of a ratio that compares systematic variance with error variance
- Comparison yields F ratio
- The greater the value of F, the more likely it is that observed group differences reflect real differences in the populations from which the sample groups were selected.
- F sampling distribution appears in Appendix E
|
|
|
Term
one-way (single-factor) analysis of variance |
|
Definition
- Tests group differences that are attributable to a single independent variable, called a factor.
- Factor: an input independent variable that relates to an output or dependent effect.
- F sampling distribution appears in Appendix E
- Lists the critical values of F required to reject the null hypothesis that no population difference exists among groups.
- The larger the F value, the greater the probability that directional and non-directional differences among sample means cannot be attributed to chance
- Must take into account (df)
- The F ratio has degrees of freedom associated with both its numerator (systematic variance) and its denominator (within-groups or error variance).
- Numerator degrees of freedom (dfn) are equal to the total number of sample groups (k) minus 1
- Denominator degrees of freedom (dfd) equal the combined number of scores in all groups (N) minus the number of groups (k).
|
|
|
Term
multiple-factor (factorial) analysis of variance |
|
Definition
- analyzes group differences that are attributable to more than one independent variable or factor.
- factorial designs
- (see table 7.4)
|
|
|
Term
t test for mean differences |
|
Definition
- Parametric test
- Appropriate for estimating significance of difference between small samples
- A test for assessing the significance of difference between two population means based on data derived from two samples, of which at least one sample is small, typically containing fewer than 30 scores.
- Takes form of a ratio, with the observed mean difference between two groups representing the numerator and an estimate of sampling error serving as the denominator
- used to estimate whether the difference observed between two sample means reflects differences we could expect in their parent populations
|
|
|
Term
z test for mean differences |
|
Definition
- Parametric test
- For large samples
- Assesses differences between two population means based on data derived from large independent random samples, typically groups containing at least 30 scores each
- Takes the form of a ratio, with observed mean differences in the numerator and estimated chance differences in the denominator.
- Does not take sample size (df) into account when reporting critical values required to reject null hypotheses
- Because a z test assumes sample sizes sufficiently large that random error does not fluctuate widely.
|
|
|
Term
z test for proportional differences |
|
Definition
- Parametric test
- Conceptually identical to z test for mean differences however,
- Calculation formula is modified to accommodate frequency data.
- Designed to assess differences between two independent frequencies derived from large samples, typically groups containing 30 or more observations
- Takes the form of a ratio, with observed frequency differences in the numerator and estimated chance differences in the denominator
- To estimate whether the z value reflects population differences, we must refer to a z sampling distribution or normal curve
- distribution does not consider sample size (df) when reporting critical values required to reject null hypotheses.
|
|
|
Term
chi-square test |
|
Definition
- Nonparametric test
- Can assess differences between two or more independent groups with frequencies ranging from moderately small to very large.
- Can perform operations with frequency data that are analogous in function and complexity to single-factor as well as multiple-factor analysis of variance.
- Commonly employed test statistic for frequency differences
- Takes the form of a ratio between observed frequency differences and random error differences
|
|
|
Term
single-sample chi-square |
|
Definition
- Enables researchers to test for the significance of difference among the categories derived from a single sample.
- In such single sample designs, expected frequencies (those we could expect by chance) often represent all observed frequencies divided by the number of categories into which these frequencies fall.
|
|
|
Term
CONTINGENCY TABLE ANALYSIS (multiple-sample chi-square) |
|
Definition
- Nonparametric test
- Tests for significance of difference among frequencies of two or more independent samples.
- Analogous to multiple factor analysis of variance
- Chi-square samples represent factors (independent variables) whose associated frequencies we wish to test for differences.
- To determine where specific differences lie, we can compute systematic chi-square comparisons between or among many of the table frequencies.
|
|
|
Term
linear relationship |
|
Definition
A straight line relationship between two variables |
|
|
Term
positive (direct) linear relationship |
|
Definition
- Indicated by a positive correlation coefficient
- occurs when two variables rise or fall together in a systematic fashion
|
|
|
Term
negative (inverse) linear relationship |
|
Definition
- Indicated by a negative correlation coefficient
- occurs when one variable systematically increases as another declines
|
|
|
Term
curvilinear relationship |
|
Definition
- Does not follow a straight line pattern
- direction of relationship between variables changes at some point
- Two common types:
- Inverted-U correlation
- Occurs when two variables initially increase together, after which one continues to increase as the other systematically declines
- U-Shaped Correlation
- Occurs when one variable initially increases as the other declines, after which the two increase together
|
|
|
Term
inverted-U curvilinear correlation |
|
Definition
Occurs when two variables initially increase together, after which one continues to increase as the other systematically declines |
|
|
Term
u-shaped curvilinear correlation |
|
Definition
Occurs when one variable initially increases as the other declines, after which the two increase together |
|
|
Term
BIVARIATE CORRELATION COEFFICIENT (r) |
|
Definition
- A numerical summary of the direction and strength of a relationship between two variables
- a) Plus and minus signs indicate direction
- b) Value between +1.00 and -1.00 indicates strength
- Research hypotheses predicting a significant correlation between two population variables
- ρ ≠ 0
- ρ (rho)
- denotes the population correlation coefficient corresponding to the sample coefficient r.
- Null hypothesis ρ = 0
- States that no linear relationship exists between the population parameters
|
|
|
Term
interpretation of bivariate correlation coefficient |
|
Definition
- 2 Steps required to test null hypothesis
- 1. Compute indices of sample covariance
- for instance, a coefficient of correlation (r) or its corresponding coefficient of determination ( r squared )
- 2. Compare indices of sample covariance (r or r squared) to an estimate of error covariance
- ^ to estimate whether sample covariation reflects population relationships
- error covariance represents shared sample variance that could occur by chance.
- The greater the magnitude of observed relative-to-error covariance, the greater the likelihood that a systematic bivariate relationship exists in the population
|
|
|
Term
interpreting linear correlation coefficients |
|
Definition
- Begin with research hypothesis prediction
- Significance of rxy tested by using a modified t-formula
- Statistical significance does not mean "important"
|
|
|
Term
interval level data: Pearson product-moment coefficient of correlation |
|
Definition
- Can be used to test null hypothesis that ρ = 0
- Because _______ assumes a linear relationship between two variables, the researcher should test the raw scores appearing in the table for linearity before computing a correlation coefficient
- Once linearity has been confirmed, compute a coefficient of correlation
- _______ is a ratio between the two variables’ observed covariance and their combined variance
- A _______ and its derivative ___ squared register the amount of total variance that two variables actually share
|
|
|
Term
nominal level data: chi-square-based correlation |
|
Definition
- Chi-square methods assess correlations between _____ variables
- 2 Methods to assess magnitude and direction of the relationship
- Pearson’s phi coefficient (φ)
- Cramer’s V coefficient
|
|
|
Term
Pearson’s phi coefficient (φ)
&
Cramer’s V coefficient
|
|
Definition
- Based on a simple two-stage logic
- Stage one: compute chi-square value
- Establishes whether a statistically significant relationship exists between two categorical variables
- Stage two: observed chi-square is compared to the maximum chi-square the two variables are capable of producing
- Each method yields an analogue of a correlation coefficient, registering relationship strength based on the ratio between observed chi-square and maximum chi-square
- This operation is analogous to Pearson’s r, which compares observed covariance with the total variance associated with two variables
- numerator = observed chi-square
- denominator = maximum chi-square
|
|
|
Term
PEARSON’S PHI COEFFICIENT (φ) |
|
Definition
- Applicable only to 2 X 2 contingency tables
- Computes meaningful correlation coefficients between variables having two and only two frequency categories each
- The closer an obtained φ is to +1, the greater the strength of the relationship.
- Phi coefficient like Pearson’s r can be squared to produce a coefficient of determination estimating the percentage of variance shared by two variables
- Should be used only when dealing with dichotomous variables
|
|
|
Term
Cramer's V coefficient |
|
Definition
- Overcomes phi’s contingency table size restrictions, and therefore, can process variables having more than two frequency categories
- The closer a ______ is to +1, the greater the strength of association between two variables
|
|
|
Term
Non-linear correlation: Eta coefficient |
|
Definition
- A positive number that registers the magnitude but not the direction of a curvilinear relationship
- Direction (U-Shaped, Inverted-U) can be determined by constructing scattergrams
|
|
|
Term
coefficient of determination |
|
Definition
- A measure of the percentage of combined variability that is common to two variables
- A numerical indicator showing how much of the variance in one variable can be explained by knowledge of another variable
|
|
|
Term
Multiple correlation coefficient (Ry.xz) |
|
Definition
- Estimates the amount of variance in a criterion measure that is explained or accounted for by its linear relationship with predictors
- indicates the direction and strength of the relationship
- registers the strength of association between criterion and predictor variables
|
|
|
Term
multiple correlation |
|
Definition
a statistical procedure for measuring the strength of association between a set of independent (predictor) variables and a single dependent (criterion) variable
|
|
Term
coefficient of multiple determination (R squared) |
|
Definition
indicates the percentage of variability in a criterion that is explained by the predictor variables |
|
|
Term
2 important principles of multiple correlation |
|
Definition
- 1. The amount of explained variance in a criterion variable increases as the size of the correlation between the predictors and the criterion goes up
- 2. The amount of explained variance in a criterion variable declines as the size of the correlation between predictor variables increases.
- Occurs because highly correlated predictors do not make wholly independent contributions to criterion variance. Rather, some portion of any correlated predictor’s contribution is conjoined with the contributions of the other predictors to which it is related
- ^ These dual principles admonish researchers to select predictors that correlate strongly with the criterion variable, but relatively poorly with one another.
|
|
|
Term
computing multiple correlation and determination coefficients |
|
Definition
- 1. Compute bivariate correlation coefficients between each pair of variables
- A numerical summary of the direction and strength of a relationship between two variables
- 2. Derive a coefficient of multiple determination and its associated coefficient of multiple correlation using Pearson’s r formula.
- ^ if this sample value reflects a systematic relationship in the parent population, we can reject the null hypothesis that the multiple coefficient of determination is zero
- F test is appropriate statistic for testing non-directional hypothesis
- Applied to multiple correlation, the F ratio compares the variance contributed by the predictors to the criterion variable with the remaining variance not explained by the predictors
- Thus, unexplained variance is an estimate of error variance
- (see table 8.3)
- must refer to the F sampling distribution in Appendix E
|
|
|
Term
regression analysis |
|
Definition
- While correlation describes the relation of two or more known variables, ______ predicts an unknown (probable) criterion (dependent variable) based on known values of a predictor variable and the relationship between these two variables
- An exceedingly useful tool for projecting the likely values of an unknown criterion measure based on our knowledge of its associated predictors.
|
|
|
Term
regression equation |
|
Definition
- Y = a + bX + e
- mathematical description of line of best fit
- Y is criterion
- X is a predictor
- a represents the Y intercept
- represents the point at which a best fitting line intersects the vertical criterion axis.
- Registers the value of the criterion variable (Y) when the predictor (X) is zero
- We can assume that the criterion Y will always be “a” units greater than the value of the predictor X.
- b represents the slope
- tells us how many units a criterion variable Y increases with each unit increase in the predictor X.
- given any value of a, the steeper the best fitting regression line, the greater the magnitude of b.
- The standard error of the estimate (e)
- provides an estimate of the accuracy of the predictions
|
|
|
Term
5 Ethical Principles of Research |
|
Definition
- a set of agreed upon standards of “goodness and badness” to apply to the four research components: means, ends, motives, and consequences.
- 1. Universalism
- components can be evaluated according to predetermined standards set by a scholarly community.
- These standards are impersonal and therefore not peculiar to any one researcher. Rather, they are derived from empirical data and previously accepted knowledge
- 2. Communality
- compels all researchers to share their research findings, including all components, freely and honestly with all other members of the research community.
- Researchers are required to report fully and accurately their methods and results, including all shortcomings of a given piece of research.
- Findings that disconfirm one’s hypothesis must be reported as fully and honestly as those confirming it
- 3. Organized skepticism
- researchers must be critical of their own research as well as the research of others
- all components of all scholarship including one’s own must be scrutinized for errors, omissions, and biases – both deliberate and inadvertent
- the best scholar is a dedicated skeptic who takes nothing for granted, but rather questions all intellectual inquiry, both the conventional and the controversial, with equal intensity and vigor
- 4. Honesty
- self
- honesty implies that the truth is reported as the reporter understands it, and the practice of being honest begins, as the adage suggests, “at home”
- researchers must take care to “see” things in the empirical world as they are, not as they would like them to be
- self-awareness enables a researcher to report personal biases so that readers can judge for themselves their potential impact on research findings
- scholarly community
- researchers are compelled to be honest with other members of the research community
- participants
- researchers are obligated to minimize deception, either by stating their research aims and revealing their true identities at the outset, or alternately, by fully debriefing participants who were initially misled about the research or the researcher’s identity
- 5. Respect
- researchers must protect all the basic human and civil rights of those who serve as research subjects
- rights include:
- right to informed and voluntary participation
- right to freedom from physical, social, and psychological harm
- right to privacy of thought and action
|
|
|
Term
Obligations to research participants |
|
Definition
- 1. Voluntary participation and informed consent
- subject’s participation should be informed and voluntary
- informed consent
- voluntary agreement to participate in research after having been given full and accurate information about the nature of the research project, including all potential physical, social, and psychological risks
- obtained in writing
- coerced participation is forbidden not only by professional ethics, but also by federal regulations that are legally binding on all universities and other institutions receiving government financial support
- 2. Freedom from harm
- research must not expose participants to physical, social, or psychological harm
- 2 risk factors
- 1. Experimental manipulations have the capacity to “create” behaviors that are socially questionable
- 2. Certain types of communication research, including survey studies and naturalistic observations, may uncover socially or psychologically sensitive materials
- information that could, if disclosed, place participants in legal or social jeopardy
- researchers contemplating socially sensitive matters must be able to guarantee the anonymity or confidentiality of results; if this is impossible, plans for such research must be abandoned
- 3. Anonymity and confidentiality
- a research participant is considered anonymous when the researcher cannot pair a given response with a given respondent
- confidentiality means that the researcher is able to identify a given person’s responses but essentially promises not to do so publicly.
- Thus anonymity implies confidentiality, but the reverse is not the case
- When researchers cannot guarantee non-disclosure, anonymity must be granted if participants desire it
- Anonymity is usually assured by assigning participants a code that carries no self-identifying data
- 4. Honesty and the practice of deception
- two primary circumstances
- 1. Experimenters often develop cover stories to mislead subjects about experimental hypotheses
- designed to minimize the biasing effects that can result when participants learn a study’s true purpose
- 2. Naturalistic researchers who adopt participatory roles sometimes surreptitiously observe and record ongoing communication without revealing their true identities
- the aim is to avoid reactivity effects, that is, behavioral alterations resulting when people know they are being observed
- Do not use deception unless you have to
- If you do have to, you must get informed consent afterwards
- 5. Privacy of thought and action
- two principal circumstances
- 1. Naturalistic researchers may sometimes surreptitiously record communications without the subjects’ knowledge and consent
- 2. Privacy invasions occur when researchers either deliberately or inadvertently disclose information about research participants that they have promised to keep in confidence
- researchers are obligated both morally and legally to respect people’s rights to privacy in matters of thought as well as action.
|
|
|
Term
political restraints on research |
|
Definition
Refer to the impact of prevailing political ideologies and exigencies on the means, ends, motives, and consequences of research |
|
|
Term
3 types of political restraints on research |
|
Definition
- 1. The politics of scientific research
- Activist research
- Topical or issue-oriented
- With researchers focusing on topics of concern to identifiable groups like women and minorities
- Sponsored (applied) research
- Often put to practical uses such as
- gathering demographic and opinion data, which allow political candidates to appeal effectively to the needs of the voting public;
- assembling information on the characteristics of television audiences, which are used to maximize a network’s share of the advertising dollar;
- and collecting social data on the personal and cultural effects of media violence or eroticism
- 2. The politics of humanistic research
- Humanistic activism
- Certain forms of humanistic research explicitly incorporate political ideologies into their critical and interpretative judgments.
- Historical "truth" criticism
- Often entails ideological judgments
- 3. Ethics, politics, and intellectual integrity
- political influences in both scientific and humanistic research have the capacity to compromise and to enrich communication scholarship
- when political ideologies affect substantially one’s choice of investigative means or ends, but the researcher fails to recognize and report this circumstance, intellectual integrity is jeopardized.
- Because readers are unable to evaluate the reported research in the context of the political influences that shaped it
- If researchers fully and accurately detail their ideological biases, no ethical imperative is compromised
- In such cases, readers have the opportunity to evaluate reported findings within the context of the political philosophy giving rise to them.
|
|
|