Term
Random error |
Definition
"unlucky" difference between sample value and true value. We assume sample n represents poulation N, but it may not. To reduce random error, increase sample size. With homogeneous populations, chance of random error is low. |
|
|
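A minimal Python sketch of the point above: with a hypothetical simulated population, larger samples produce sample means that stray less from the true mean (all numbers are illustrative).

```python
import random
import statistics

random.seed(42)

# Hypothetical population of 100,000 satisfaction scores (illustrative only).
population = [random.gauss(mu=7.0, sigma=2.0) for _ in range(100_000)]
true_mean = statistics.mean(population)

for n in (30, 300, 3000):
    # Draw repeated samples of size n and measure how far each sample mean
    # lands from the true population mean (the random sampling error).
    errors = []
    for _ in range(100):
        sample = random.sample(population, n)
        errors.append(abs(statistics.mean(sample) - true_mean))
    print(f"n={n:5d}  average |sample mean - true mean| = {statistics.mean(errors):.3f}")
```
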
Term
Systematic error |
Definition
Also known as bias or non-sampling error. The difference between the information being sought and the information actually obtained, caused by the survey's design or execution. |
|
|
Term
Types of systematic error |
|
Definition
Sample design error: + Frame error (e.g., a phonebook sample excludes cell-phone-only users) + Population specification error (e.g., choosing the city population when the entire metro area is needed) + Selection error (not following the sampling procedure)
Measurement error: + Surrogate information error (e.g., income brackets that don't break into the important groups) + Interviewer error + Measurement instrument bias (e.g., a confusing questionnaire) + Processing error (e.g., manual data entry) + Nonresponse bias + Response bias (respondents answer falsely) |
|
|
Term
Types of survey methods |
Definition
* Door-to-door interviews * Executive interviews * Mall-intercept interviews * Telephone interviews * Self-Administered questionnaires * Mail Surveys |
|
|
Term
Determination of survey method |
|
Definition
* Sampling precision * Budget * Requirements for respondent reactions * Quality of data * Length of questionnaire * Incidence rate * Structure of questionnaire * Time available to complete study |
|
|
Term
Advantages & Disadvantages of Online Surveys |
|
Definition
Advantages: * Quick * Cheap * High response rate * Can do longer surveys
Disadvantages: * No way to probe for more information * Hard to confirm validity of responses |
|
|
Term
Steps in the measurement process |
Definition
1) Identify concept of interest 2) Develop construct 3) Define concept constitutively 4) Define concept operationally 5) Develop a measurement scale 6) Evaluate reliability/validity of scale |
|
|
Term
Constitutive definition |
Definition
Dictionary-like definition that relates one concept to others. A statement of the meaning of the central idea under study that establishes its boundaries |
|
|
Term
Operational definition |
Definition
Statement of the observable characteristics of the concept that will be measured and the process for assigning a value to the concept |
|
|
Term
Types of reliability |
Definition
Test-retest: administer the test a second time under conditions as similar as possible to the original conditions
Equivalent form: two similar forms of the instrument produce closely correlated results
Split-half (a type of internal consistency reliability): split the scale's items into two halves and check how strongly results from the two halves correlate |
|
|
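A minimal Python sketch of split-half reliability as described above, using made-up item responses: correlate the two halves of the scale, then apply the Spearman-Brown correction to estimate full-scale reliability.

```python
import statistics  # statistics.correlation requires Python 3.10+

# Hypothetical responses: each row is one respondent's answers to a 6-item scale (1-5).
responses = [
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 4],
    [5, 5, 5, 4, 5, 5],
    [1, 2, 1, 2, 1, 1],
]

# Split the items into two halves (odd-numbered vs. even-numbered items) and total each half.
half_a = [sum(r[0::2]) for r in responses]
half_b = [sum(r[1::2]) for r in responses]

# Correlate the halves, then apply the Spearman-Brown correction.
r = statistics.correlation(half_a, half_b)
split_half_reliability = (2 * r) / (1 + r)
print(f"half correlation = {r:.3f}, Spearman-Brown reliability = {split_half_reliability:.3f}")
```
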
Term
Types of validity |
Definition
Face: the scale *seems* to measure what it should measure
Content: does the scale provide adequate coverage of the concept?
Criterion-related: + Predictive: ability to predict a future level of a given criterion (e.g., SAT scores predicting success in college) + Concurrent: ability to measure the current level of a criterion (e.g., a pregnancy test indicating current pregnancy)
Construct: + Convergent: two tests that claim to measure the same concept produce highly correlated results + Discriminant: two tests that claim to measure different concepts produce results that do not correlate |
|
|
Term
Scales: Graphic rating scale |
Definition
scale with a graphic continuum, anchored by two extremes |
|
|
Term
Scales: Itemized rating scale |
Definition
scale with a limited number of ordered categories |
|
|
Term
Scales: Paired comparison |
|
Definition
measurement scale that asks the respondent to pick one of two objects in a set (Coke vs. Pepsi, etc.) |
|
|
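A small Python sketch of tallying paired-comparison choices; the brands and choices are hypothetical.

```python
from collections import Counter

# Hypothetical paired-comparison data: (pair shown, brand the respondent picked).
choices = [
    (("Coke", "Pepsi"), "Coke"),
    (("Coke", "Pepsi"), "Pepsi"),
    (("Coke", "Sprite"), "Coke"),
    (("Pepsi", "Sprite"), "Pepsi"),
    (("Coke", "Pepsi"), "Coke"),
]

wins = Counter(chosen for _pair, chosen in choices)  # times each brand was picked
appearances = Counter()                              # times each brand was shown
for pair, _chosen in choices:
    appearances.update(pair)

for brand, shown in appearances.items():
    print(f"{brand}: preferred in {wins[brand]}/{shown} pairings")
```
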
Term
Scales: Semantic differential |
|
Definition
measurement scale that asks respondents to rate an object on pairs of dichotomous (bipolar) words or phrases that describe it; the mean rating is calculated for each pair. Ex. easy to use vs. difficult to use |
|
|
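A minimal Python sketch of scoring a semantic differential: average each bipolar item across respondents to get the object's profile (items and ratings are hypothetical).

```python
# Hypothetical 7-point semantic differential ratings; each row is one respondent,
# each column is a bipolar item (7 = positive pole, 1 = negative pole).
items = ["easy to use / difficult to use", "modern / old-fashioned", "reliable / unreliable"]
ratings = [
    [6, 5, 7],
    [5, 4, 6],
    [7, 6, 6],
    [4, 5, 5],
]

# The mean on each item forms the profile that is typically plotted across items.
for i, item in enumerate(items):
    scores = [row[i] for row in ratings]
    print(f"{item}: mean = {sum(scores) / len(scores):.2f}")
```
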
Term
Scales: Stapel scale |
Definition
one characteristic rated on a scale from +5 to -5 |
|
|
Term
Scales: Likert scale |
Definition
measurement scale in which respondent marks level of agreement with a particular statement |
|
|
Term
Scales: Purchase intent scale |
Definition
scale used to measure a respondent's intention to buy or not buy a product |
|
|
Term
Scales: Net promoter score (NPS) |
|
Definition
Scale that measures likelihood to recommend |
|
|
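A minimal Python sketch of the usual NPS calculation (percentage of promoters minus percentage of detractors), with hypothetical 0-10 responses.

```python
# Hypothetical 0-10 "how likely are you to recommend us?" responses.
scores = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8, 10, 5]

promoters = sum(1 for s in scores if s >= 9)   # 9-10 = promoters
detractors = sum(1 for s in scores if s <= 6)  # 0-6 = detractors (7-8 = passives)
nps = (promoters - detractors) / len(scores) * 100
print(f"NPS = {nps:.0f}")
```
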
Term
Questionnaire Design: Dos |
|
Definition
* Be brief * Be clear * Be grammatically simple * Focus on a single topic/question (no double-barreled questions) * Use respondents' core vocabulary |
|
|
Term
Questionnaire Design: Don'ts |
|
Definition
* Bias respondent * Assume criteria that are not obvious * Ask hypothetical questions * Ask for specifics when only generalities can be remembered |
|
|
Term
Questionnaire Design: Question order |
Definition
Screeners first; easy questions before hard ones; demographics at the end |
|
|
Term
Types of marketing research (from less well-defined to more well-defined research objectives) |
|
Definition
Exploratory (e.g., qualitative research); Descriptive (e.g., surveys); Causal (e.g., experiments) |
|
|
Term
Incidence rate |
Definition
Percentage of population that qualify as respondents |
|
|
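A tiny Python sketch estimating an incidence rate from hypothetical screening counts.

```python
# Hypothetical screening results.
contacted = 1250   # people screened
qualified = 175    # people who met the qualifying criteria

incidence_rate = qualified / contacted * 100
print(f"estimated incidence rate = {incidence_rate:.1f}%")
```
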
Term
Differences between qualitative and quantitative research |
|
Definition
Qualitative research is not based on measurable statistics (quantitative research is); it focuses on subjective observations and analysis. Qualitative research is used when research objectives are not well defined, e.g., for concept testing.
Qualitative: * Small sample size * Probing questions * Substantial information per respondent * Requires an interviewer with special skills * Degree of replicability is low * Researcher training is in soft sciences (e.g. psychology) * For exploratory research
Quantitative: * Large sample size * Less probing questions * Limited information per respondent * Interviewer need not have special skills * Degree of replicability is high * Researcher training is in hard sciences (e.g. statistics) * For descriptive research |
|
|
Term
Advantages of qualitative research |
|
Definition
* Qualitative studies are usually cheaper than quantitative studies * Excellent means of understanding consumers' in-depth motivations and feelings * Can improve the efficiency of quantitative research |
|
|
Term
Disadvantages of qualitative research |
|
Definition
* Not statistically representative of the population * Cannot statistically measure the attitudes and behaviors captured * Cannot capture small but important differences * Anyone can claim to be an expert |
|
|
Term
Qualitative methodologies |
|
Definition
* Focus groups * Depth interviews * Hermeneutic research - interpretation through conversations (e.g. speech bubble cartoons) * Delphi method - several rounds with experts * Projective tests - technique for tapping respondents' deepest feelings by having them project feeling onto an unstructured situation |
|
|
Term
Types of projective tests |
Definition
* Word association * Analogy * Personification * Sentence/Story completion * Cartoon tests * Photo sorts * Customer drawings * Storytelling * 3rd person technique * Mason Haire shopping list |
|
|
Term
Advantages and disadvantages of focus groups |
|
Definition
Pros: * Candor of participants * Generates fresh ideas and brainstorming * Client can observe on site (provide immediate feedback) * Can enhance other data collection methods * Can be executed quickly
Cons: * Expense * Time * High level of expertise needed * Interpretation is subjective * Often misused as representative of pop. |
|
|
Term
Different approaches for observation |
|
Definition
* natural vs. contrived * open vs. disguised * structured vs. unstructured * human vs. machine * direct vs. indirect (e.g. past behavior) |
|
|
Term
Types of human observation studies |
|
Definition
* Ethnographic research * Mystery shoppers * One-way mirror observation * Audits (e.g. pantry audits, garbology) * Shopper patterns/behavior * Content analysis |
|
|
Term
Types of machine observation studies |
|
Definition
* Traffic counters
* Physiological measurement: EEG, galvanic skin response, pupilometer, voice pitch analysis
* Opinion and behavior measurement: people reader (measures where the eyes track when looking at print ads), rapid analysis measurement (dial used to register opinion), GPS (measures exposure to outdoor advertising such as billboards and buses), people meter (measures what TV audiences are watching)
* Scanner-based research: laser scanners (which products are purchased), BehaviorScan (panelists shop with an ID card), InfoScan (network of store scanners) |
|
|
Term
Advantages of observation compared with other types of research |
|
Definition
* Quick data collection * Avoids interviewer bias * See what people actually do (as opposed to what they say they do) |
|
|
Term
Disadvantages of observation compared with other types of research |
|
Definition
* Researcher does not learn motives or demographics * Time-consuming and expensive |
|
|
Term
Three determinants of causality |
|
Definition
If A causes B: * concomitant variation (A and B vary together as predicted) * temporal sequence (the change in A occurs before the change in B) * non-spurious association (no other plausible explanation for the change in B) |
|
|
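A minimal Python sketch of checking concomitant variation: correlate A and B (made-up figures); by itself this does not establish causality without temporal sequence and a non-spurious association.

```python
import statistics  # statistics.correlation requires Python 3.10+

# Hypothetical monthly advertising spend (A) and sales (B).
ad_spend = [10, 12, 9, 15, 14, 11, 16, 13]
sales    = [101, 110, 98, 125, 120, 105, 130, 112]

# Concomitant variation: A and B vary together (strong positive correlation here).
r = statistics.correlation(ad_spend, sales)
print(f"correlation(A, B) = {r:.2f}")
# Correlation alone is not causality: temporal sequence and ruling out
# other explanations (non-spurious association) are still required.
```
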
Term
|
Definition
Extent to which competing explanations for experimental results observed can be ruled out (higher in laboratory settings) |
|
|
Term
|
Definition
Extent to which causal relationships measured in an experiment can be generalized to outside persons, settings, and times (higher in field settings) |
|
|
Term
|
Definition
factors one does not control but has to live with. For example: * weather * competition's actions * economy * societal trends * political environment |
|
|
Term
Field vs. lab experiments |
|
Definition
Field - high in external validity, low in internal validity; offers a realistic environment but little control over spurious factors
Lab - higher in internal validity, lower in external validity; easier to control variables but limited applicability to the actual marketplace |
|
|
Term
Threats to internal validity |
|
Definition
* History * Instrument variation * Selection bias * Mortality * Testing effect * Regression to mean |
|
|
Term
Pre-experimental designs |
Definition
One-shot case study: X O1
One-group pretest-posttest: O1 X O2
Static group comparison: X O1 (experimental group); O2 (control group) |
|
|
Term
True experimental designs |
|
Definition
Before and After with Control: (R) O1 X O2; (R) O3 O4
After-Only with Control: (R) X O1; (R) O2
Solomon Four-Group: (R) O1 X O2; (R) O3 O4; (R) X O5; (R) O6 |
|
|
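A minimal Python sketch of estimating the treatment effect in a Before-and-After with Control design as (O2 - O1) - (O4 - O3); the observation values are hypothetical.

```python
# Hypothetical observations:
# (R) O1  X  O2   experimental group, measured before and after treatment X
# (R) O3     O4   control group, measured at the same times without X
o1, o2 = 40.0, 52.0   # experimental group: before, after
o3, o4 = 41.0, 44.0   # control group: before, after

# Treatment effect = change in the experimental group minus change in the control group.
effect = (o2 - o1) - (o4 - o3)
print(f"estimated treatment effect = {effect:.1f}")   # (12.0) - (3.0) = 9.0
```
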
Term
Quasi-experimental designs |
|
Definition
Interrupted time series: O1 O2 O3 O4 X O5 O6 O7 O8
Multiple time series: O1 O2 O3 O4 X O5 O6 O7 O8 (experimental group); O1 O2 O3 O4 O5 O6 O7 O8 (control group) |
|
|
Term
What problems do quasi-experiments solve? |
|
Definition
No control over the scheduling of treatments, OR subjects cannot be randomly assigned to groups |
|
|
Term
Why aren't pre-experimental designs considered to be true experimental designs? |
|
Definition
Offer little or no control over extraneous vars |
|
|
Term
What makes true experimental designs better than pre- or quasi-experimental designs? |
|
Definition
Use of both an experimental group and a control group, with randomization |
|
|
Term
What are the 4 methods to control extraneous variables? |
|
Definition
1) Randomization 2) Physical control 3) Design control 4) Statistical control |
|
|
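A minimal Python sketch of randomization, the first control method listed above: randomly assign subjects to the experimental and control groups so extraneous subject differences average out (subject IDs are hypothetical).

```python
import random

random.seed(7)

# Hypothetical subject IDs.
subjects = [f"S{i:02d}" for i in range(1, 21)]

# Randomization: shuffle, then split into experimental and control groups.
random.shuffle(subjects)
half = len(subjects) // 2
experimental, control = subjects[:half], subjects[half:]
print("experimental:", experimental)
print("control:     ", control)
```
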