Term
What are some unethical activities by researchers?
Definition
1. Unethical pricing: promise a low price, then jack it up
2. Failing to provide (promised) incentives to research subjects
3. Abusing respondents: promising a short survey that turns into an hour; passing along information without permission; collecting information without permission
4. Selling useless research services
5. Interviewers making up data ("curbstoning" or "rocking chair" interviewing), or creating "phantom" data (duplicating actual data to boost the sample)
Term
Sampling vs. non-sampling error
Definition
Sampling error: statistically speaking, the difference between the sample results and the population parameter. Even assuming a perfect survey, sampling frame, execution, and respondents, we still have error due to sampling. SAMPLING ERROR BECOMES SMALLER WITH A LARGER SAMPLE (a simulation sketch follows below).
Non-sampling error (systematic): a variety of errors unrelated to sampling error and/or sample size. Leads to systematic variation (e.g., responses skewed toward the socially desirable). Is controllable through good survey design and procedures. Cannot be estimated (sampling error can be). Non-sampling errors are interdependent.
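Note: a quick way to see that sampling error shrinks with larger samples is to simulate it. This is a minimal sketch; the population values and sample sizes are invented for illustration, not from the course material.

```python
import random
import statistics

random.seed(42)

# Hypothetical population of 100,000 survey responses with a known mean.
population = [random.gauss(50, 10) for _ in range(100_000)]
true_mean = statistics.mean(population)

# For each sample size, draw many samples and track the typical sampling
# error: how far the sample mean lands from the population mean.
for n in (25, 100, 400, 1600):
    errors = [abs(statistics.mean(random.sample(population, n)) - true_mean)
              for _ in range(200)]
    print(f"n={n:5d}  avg |sample mean - true mean| = {statistics.mean(errors):.3f}")
```

The average error roughly halves each time the sample size quadruples, the familiar 1/sqrt(n) pattern.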
Term
Sampling errors: population specification error, sample selection error, sample frame error
Definition
Population specification error: your population is all Republicans, but you define your population as Republicans in WA.
Sample selection error: when an inappropriate sample is selected from the desired population; may be due either to poor sampling procedures or to intentionally excluding certain people from the sample.
Sample frame error: the sample frame is the list of potential people in your target population; sample frame error occurs when the sample frame is not representative of your population (e.g., only those with email addresses).
Term
Who is Paco Underhill, and when is observation needed?
Definition
The "king" of observation: author of Why We Buy: The Science of Shopping and founder of Envirosell. Observation is needed:
- when the respondent may not be able to accurately recall the frequency of a behavior, and/or may be inclined to give misleading answers
- when the question of interest is a behavior
- when the behavior in question is relatively frequent and occurs within a limited time frame
- when the behavior in question can be observed
Term
Observation method: advantages
Definition
gain data on actual behavior (rather than self-reported behavior, which may be biased)
Term
Observation method: disadvantages
Definition
- generalizing from a limited number of observations can be difficult
- may be difficult to understand why the behavior occurred
Term
Theory
Definition
a body of interconnected propositions about how a phenomenon works (recall the animosity model)
Term
Hypothesis testing: Null (dull) hypothesis H0
Definition
- nothing interesting is going on
- any differences we are observing are completely due to chance
Term
Alternative Hypothesis H1
Definition
- something interesting is going on
- differences in the DV are due to the IV (a sketch follows below)
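Note: a minimal sketch of the H0/H1 logic using a two-sample t-test; the data are invented, and scipy is just one convenient way to run the test.

```python
from scipy import stats

# Hypothetical DV scores (e.g., product sales) under two levels of an IV.
control   = [12, 15, 14, 10, 13, 11, 14, 12]
treatment = [16, 18, 15, 17, 19, 16, 14, 18]

# H0 (dull): the groups differ only by chance.
# H1: the difference in the DV is due to the IV.
t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: something interesting is going on.")
else:
    print("Fail to reject H0: observed differences may be due to chance.")
```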
Term
Independent variable (IV)
Definition
x, the cause, the predictor; the variable you "manipulate" (e.g., good vs. bad aroma in a store)
Term
Dependent variable (DV)
Definition
What happens after you manipulate the IV (e.g., sales of a product)
Term
Control variables
Definition
- variables that you don't allow to vary along with the IV
- if any variable covaries with the IV, then there is a confound (if music systematically varies along with the aroma, you can't tell whether it's the aroma or the music that influences sales)
Term
Extraneous Variables (or noise)
Definition
"stuff happens" during an experiment but it evens out across the levels of the independent variable (different music at different times, but it doesnt systematically vary by the level of the IV. |
|
|
Term
Factorial design (main effects and interaction)
Definition
- when the researcher is examining the impact of two IVs on a DV
- can have two main effects (the overall impact of each IV) and an interaction (the combined effect of the two IVs); a sketch follows below
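Note: a minimal 2x2 sketch of main effects and an interaction, using invented cell means (aroma x music on sales).

```python
# Hypothetical mean sales in a 2x2 factorial design:
# IV1 = aroma (good/bad), IV2 = music (on/off).
m = {("good", "on"): 120, ("good", "off"): 100,
     ("bad",  "on"):  80, ("bad",  "off"):  78}

# Main effect of aroma: compare aroma levels, averaging over music.
aroma_effect = (m["good", "on"] + m["good", "off"]) / 2 \
             - (m["bad", "on"] + m["bad", "off"]) / 2

# Main effect of music: compare music levels, averaging over aroma.
music_effect = (m["good", "on"] + m["bad", "on"]) / 2 \
             - (m["good", "off"] + m["bad", "off"]) / 2

# Interaction: does the music effect depend on the aroma level?
interaction = (m["good", "on"] - m["good", "off"]) \
            - (m["bad", "on"] - m["bad", "off"])

print(f"Main effect of aroma: {aroma_effect:+.1f}")
print(f"Main effect of music: {music_effect:+.1f}")
print(f"Interaction (difference of differences): {interaction:+.1f}")
```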
Term
Moderation (moderator variable)
Definition
- under what conditions is a relationship stronger or weaker?
- when the effect of one IV (service failure) on the DV (negative word of mouth) depends on the level of another IV (trait hostility); a sketch follows below
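Note: moderation is read off the same kind of design as an interaction; a sketch with invented group means, where trait hostility is the hypothetical moderator.

```python
# Hypothetical mean NWOM (negative word of mouth) scores.
# IV = service failure (yes/no); moderator = trait hostility (high/low).
nwom = {("failure", "high"): 8.5, ("none", "high"): 2.0,
        ("failure", "low"):  4.0, ("none", "low"):  2.0}

# Effect of service failure at each level of the moderator.
effect_high = nwom["failure", "high"] - nwom["none", "high"]
effect_low  = nwom["failure", "low"]  - nwom["none", "low"]

print(f"Failure effect at high hostility: {effect_high:+.1f}")
print(f"Failure effect at low hostility:  {effect_low:+.1f}")
# The failure -> NWOM relationship is stronger for high-hostility
# respondents, so trait hostility moderates the effect.
```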
Term
Mediation (mediator variable)
Definition
- like a combined shot in pool: the effect of one IV on the DV occurs through an "intermediary" variable (think cue ball hits one ball, which hits the eight ball)
- example: a person experiences a service failure, infers a negative motive, feels angry, and spreads NWOM; here, ANGER is the mediator between the inference of a negative motive and NWOM (a sketch follows below)
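Note: a minimal regression sketch of mediation with simulated data; the coefficients are invented, and ordinary least squares via numpy stands in for whatever analysis the course actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Simulated causal chain: negative-motive inference -> anger -> NWOM.
motive = rng.normal(size=n)                 # X (IV)
anger  = 0.8 * motive + rng.normal(size=n)  # M (mediator)
nwom   = 0.7 * anger + rng.normal(size=n)   # Y (DV)

def coefs(y, *xs):
    """Least-squares slopes of y on the given predictors (intercept dropped)."""
    X = np.column_stack([np.ones(n), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

total = coefs(nwom, motive)[0]                  # X -> Y ignoring the mediator
direct, via_anger = coefs(nwom, motive, anger)  # X -> Y controlling for M

print(f"Total effect of motive on NWOM:  {total:.2f}")
print(f"Direct effect controlling anger: {direct:.2f}")
print(f"Effect of anger on NWOM:         {via_anger:.2f}")
# The motive coefficient shrinks toward zero once anger enters the model:
# the classic signature of mediation (the effect flows through anger).
```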
Term
Three types of validity
Definition
internal, external, and construct validity
Term
Validity
Definition
the extent to which conclusions drawn from a study are true
Term
Internal validity
Definition
when a researcher can clearly identify cause-and-effect relationships
Term
External validity
Definition
the extent to which what you find in your study can be generalized to your target population
Term
Construct validity
Definition
- the extent to which your constructs of interest are accurately and completely identified (measured)
- in other words, the extent to which you are actually measuring what you say you are measuring (your sensation-seeking scale really does measure the true construct of sensation seeking)
Term
Threats to Internal Validity (7)
Definition
1. History effect
2. Maturation effect
3. Testing effect
4. Instrumentation effect
5. Statistical regression
6. Selection bias
7. Mortality
Term
History effect, maturation effect, testing effect
Definition
History effect: when something happens during the course of a study that affects the dependent variable.
Maturation effect: similar to the history effect; something happens over time (changes within the individual) that affects the DV.
Testing effect: in a pretest-posttest design, you affect the time-2 DV by pretesting at time 1; the simple act of measuring the DV at time 1 changes the DV at time 2.
Term
Instrumentation effect, statistical regression, selection bias, mortality
Definition
Instrumentation effect: the mere fact that you are measuring something (observing behavior) changes the behavior.
Statistical regression: when you select groups based on extreme scores, they regress toward the mean, changing your groups (a sketch follows below).
Selection bias: when groups (control, experimental) differ before the experimental manipulation; creates unequal groups.
Mortality: some participants drop out or die, and these dropouts change the scores in their condition; those who stick around may be different from those who drop out.
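Note: a quick simulation of the statistical-regression threat with invented test scores; a group selected for extreme time-1 scores drifts back toward the mean at time 2 even though nothing was done to them.

```python
import random
import statistics

random.seed(1)

# Each person's observed score = stable true ability + fresh noise per test.
ability = [random.gauss(100, 10) for _ in range(10_000)]
time1 = [a + random.gauss(0, 10) for a in ability]
time2 = [a + random.gauss(0, 10) for a in ability]

# Select an "extreme" group: the top time-1 scorers (no treatment given).
cutoff = sorted(time1)[-500]  # roughly the top 5%
extreme = [i for i, s in enumerate(time1) if s >= cutoff]

print(f"Extreme group at time 1: {statistics.mean(time1[i] for i in extreme):.1f}")
print(f"Same group at time 2:    {statistics.mean(time2[i] for i in extreme):.1f}")
# Time-2 scores sit closer to the population mean of 100 purely because
# the selection at time 1 captured lucky noise, not a real change.
```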
Term
Probability sampling
Definition
Each sampling unit has a known probability of being included in the sample (a sketch follows below).
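Note: a minimal sketch of simple random sampling, the textbook probability-sampling method: with a frame of size N and a sample of size n, each unit's inclusion probability is a known n/N. The frame of customer IDs is invented.

```python
import random

# Hypothetical sampling frame: N = 5,000 customer IDs.
frame = [f"cust_{i:04d}" for i in range(5_000)]
n = 250

# Simple random sampling without replacement: every unit in the frame has
# the same known inclusion probability, n / N = 250 / 5,000 = 5%.
sample = random.sample(frame, n)

print(f"Inclusion probability per unit: {n / len(frame):.0%}")
print("First few sampled units:", sample[:5])
```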