Term
1a. what is the parameter to consider when determining the ability of a test to detect disease? -what is that mathematically? |
|
Definition
sensitivity -# true positives/# ppl with the disease |
|
|
Term
1b. what test for disease screening? and what may occur with this kind of test? |
|
Definition
test w/ high sensitivity -may have false positives, but misses few ppl w/ the disease, so low false-negative rate |
|
|
Term
2a. what is the ability of a test to detect health (or nondisease) -what is that mathematically? |
|
Definition
specificity -# true negatives / # ppl w/o disease |
|
|
Term
2b. what test is used for disease confirmation? what might occur with that? |
|
Definition
test w/ high specificity -false negatives occur, but won't call anyone sick who is actually healthy (low false-positive rate) -ideally want high specificity and high sensitivity (otherwise ppl w/ disease may be called healthy) |
|
|
Term
4. when a test is + for a disease, what will measure how likely it is that the patient has the disease? -what is that mathematically? |
|
Definition
positive predictive value (PPV) -number of true positives / total # ppl with positive test |
|
|
Term
4b. what does the PPV depend on? how does PPV change related to specificity and sensitivity changes? |
|
Definition
depends on prevalence of disease (higher prevalence, higher PPV) -increased sensitivity (eg, from a lower cut-off) usually brings more false positives, so the # of positive tests grows faster than the # of true positives and PPV falls -increased specificity, fewer false positives among the positive tests, so increased PPV |
|
|
Term
5a. when a test is negative, what measures how likely it is that the pt. is healthy and does not have disease? -what is that mathematically? |
|
Definition
negative predictive value (NPV) -# true negatives/ #negative tests |
|
|
Term
5b. what does the NPV depend on? how does it change with changes in specificity and sensitivity? |
|
Definition
prevalence (higher prevalence, lower NPV) -increased sensitivity, fewer false negatives, so the negative tests are mostly true negatives and NPV increases -increased specificity, more true negatives (with no change in false negatives), so increased NPV |
|
|
Term
3. trade-off between sensitivity and specificity; example, if you change cut-off glucose value for screening diabetes |
|
Definition
-if the cut-off is raised, then fewer ppl will be called diabetic, so fewer false positives but more false negatives; specificity increases and sensitivity decreases -if the cut-off is lowered, more ppl will be called diabetic, so more false positives, fewer false negatives, and so greater sensitivity and lower specificity |
|
|
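The cut-off trade-off above can be sketched in a few lines of Python. This is a minimal illustration with made-up glucose values, not clinical thresholds:

```python
# Hypothetical fasting glucose values (mg/dL) for illustration only.
diabetic = [118, 126, 135, 142, 150, 163, 171, 188]
healthy  = [82, 88, 91, 95, 99, 104, 110, 117]

def sens_spec(cutoff):
    """Call the test 'positive' when glucose >= cutoff; return (sensitivity, specificity)."""
    tp = sum(g >= cutoff for g in diabetic)   # true positives
    fn = len(diabetic) - tp                   # false negatives
    tn = sum(g < cutoff for g in healthy)     # true negatives
    fp = len(healthy) - tn                    # false positives
    return tp / (tp + fn), tn / (tn + fp)

# Raising the cut-off: sensitivity falls, specificity rises.
for cutoff in (100, 115, 130):
    se, sp = sens_spec(cutoff)
    print(f"cutoff {cutoff}: sensitivity {se:.2f}, specificity {sp:.2f}")
```

With these numbers, a cut-off of 100 catches every diabetic (sensitivity 1.0) but mislabels some healthy ppl, while 130 labels no healthy person diabetic (specificity 1.0) at the cost of missed cases.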
Term
6a. what is the # cases of a disease attributable to one risk factor (ie, the amt. by which the incidence of a condition is expected to decrease if the risk factor in question is removed)? |
|
Definition
attributable risk -ie, how much of the incidence is due to the risk factor? |
|
|
Term
6b. if incidence of lung cancer is 1/100 in the general population and 10/100 in smokers, what is the attributable risk of smoking causing lung ca? |
|
Definition
9/100 (10/100 in smokers - 1/100 baseline = 9/100)
-assumes properly matched controls |
|
|
Term
7. given a 2x2 table, with disease + and - across the top and test + and - down the side, with cells a, b in the top row and c, d in the bottom row, define the following: -sensitivity -specificity -PPV -NPV -odds ratio -relative risk -attributable risk |
|
Definition
sensitivity: a / (a+c)
specificity: d / (b+d)
PPV: a / (a+b)
NPV: d / (c+d)
odds ratio: (a/b) / (c/d) = ad/bc
relative risk: [a/(a+b)] / [c/(c+d)]
attributable risk: [a/(a+b)] - [c/(c+d)] |
|
|
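The 2x2 formulas above can be computed directly; the counts below are hypothetical:

```python
# a = true positives, b = false positives, c = false negatives, d = true negatives
# (for OR/RR/attributable risk, read rows as exposure +/- instead of test +/-).
a, b, c, d = 80, 30, 20, 70   # made-up counts for illustration

sensitivity = a / (a + c)                        # 0.8
specificity = d / (b + d)                        # 0.7
ppv = a / (a + b)
npv = d / (c + d)
odds_ratio = (a / b) / (c / d)                   # = ad/bc
relative_risk = (a / (a + b)) / (c / (c + d))
attributable_risk = a / (a + b) - c / (c + d)

print(sensitivity, specificity, round(odds_ratio, 2), round(relative_risk, 2))
```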
Term
8. what compares the disease risk in people exposed to a certain factor with disease risk in people who have not been exposed to the factor in question? what does it require? |
|
Definition
relative risk -ie, compares risk to no risk for disease development -for USMLE: requires calculation from prospective or experimental studies; it cannot be calculated from retrospective data -so if asked to calculate relative risk from retrospective data, the answer is "cannot be calculated" or "none of the above" |
|
|
Term
9. what is a clinically significant value for relative risk? |
|
Definition
any value other than 1 is significant -like 1.5 means 1.5x more likely to develop disease if exposed to risk factor -like 0.5, means half as likely to develop disease if you are exposed to the risk factor (risk factor is protective) |
|
|
Term
10. what attempts to estimate relative risk w/ retrospective studies (ie, case control)? |
|
Definition
odds ratio |
|
|
Term
10b. what does odds ratio compare? what is a significant value for odds ratio? |
|
Definition
(if risk factor: disease vs. no disease) / (if no risk factor: disease vs. no disease) -any odds ratio other than 1 is significant -less than perfect way to estimate relative risk using retrospective data (as RR can only be calculated from prospective or experimental studies) |
|
|
Term
11. what do you need to know about the standard deviation for the USMLE? |
|
Definition
for a normal or bell-shaped distribution, 1 SD holds 68% of the values, 2 SD holds 95% of the values, and 3 SD holds 99.7% of the values |
|
|
Term
12. what is the average value, what is the middle value, and what is the most common value? |
|
Definition
mean; median (if even # of #'s, avg. the 2 middle #s); mode -remember, in a normal distribution, mean=median=mode |
|
|
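Python's standard library computes all three, including the even-count median rule from the card:

```python
import statistics

values = [2, 3, 3, 5, 7, 10]          # even count: median averages the middle 3 and 5
print(statistics.mean(values))        # 5
print(statistics.median(values))      # 4.0 (average of the two middle values)
print(statistics.mode(values))        # 3  (most common value)
```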
Term
13. what is a skewed distribution and how does it affect the mean, median, and mode? |
|
Definition
implies that the distribution is not normal (not bell-shaped) -positive skew: asymmetric distribution with an excess of high values (ie, tail on the right of the curve), and mean>median>mode -negative skew: asymmetric distribution w/ an excess of low values, so the tail is on the left, and mean<median<mode -SD and mean are less meaningful in these settings b/c the distribution is not normal |
|
|
Term
14. define test reliability. how is it related to precision? what reduces reliability? |
|
Definition
reliability of a test = precision -reliability measures reproducibility and consistency of a test -good inter-rater reliability means the person taking the test will get the same score if 2 different ppl administer the same test -random error reduces reliability and precision (eg, limited significant figures) |
|
|
Term
15. define test validity. how is it related to accuracy? what reduces validity? |
|
Definition
validity of a test = accuracy -measures trueness of a test (ie, whether the test measures what it claims to measure) -ex: a valid IQ test given to a genius should not indicate low intelligence -systematic error reduces validity and accuracy (eg, when equipment is miscalibrated) |
|
|
Term
16. define correlation coefficient. what is the range of its values? |
|
Definition
measures to what degree two variables are related -value of cc ranges from -1 to +1 |
|
|
Term
17. T or F. A correlation coefficient of -0.6 is a stronger correlation coefficient than +0.4. |
|
Definition
T. the further from zero, the stronger the correlation coefficient. Zero correlation means no association (the two variables are totally unrelated). +1 means perfect positive correlation (when one variable increases, the other does too). -1 is perfect negative correlation (when one variable increases, the other decreases) -use the absolute value for the strength of the correlation (ie, -0.3 equals +0.3 in strength, but differs in direction) |
|
|
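A small hand-rolled Pearson correlation shows the +1/-1 endpoints described above (the helper name `pearson` and the sample data are illustrative):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient; ranges from -1 to +1."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5]
print(round(pearson(x, [2, 4, 6, 8, 10]), 6))   # 1.0  perfect positive
print(round(pearson(x, [10, 8, 6, 4, 2]), 6))   # -1.0 perfect negative
```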
Term
18. define confidence interval. why is it used? |
|
Definition
when you take a data set from a subset of a population and calculate the mean, you want to say that mean is the same as the mean for the whole population. In fact, however, the two means are usually not equal. -a 95% CI means you are 95% confident that the mean of the general population is w/in about 2 standard errors (SE = SD/square root of n, not the raw SD) of your sample mean -ex: if you sample the heart rate of 100 ppl and get a mean of 80 beats/min with a SD of 2, the SE is 0.2, so the 95% confidence interval is roughly 79.6-80.4 beats/min |
|
|
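The heart-rate example works out as follows (1.96 is the usual z multiplier for 95% confidence):

```python
import math

# 95% CI for a sample mean uses the standard error (SD / sqrt(n)), not the SD itself.
mean, sd, n = 80, 2, 100              # heart-rate sample from the card
se = sd / math.sqrt(n)                # 0.2
low, high = mean - 1.96 * se, mean + 1.96 * se
print(f"95% CI: {low:.2f}-{high:.2f} beats/min")   # 79.61-80.39
```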
Term
19. what 5 types of studies should you know about for step 2? |
|
Definition
highest to lowest quality and desirability: 1) experimental studies 2) prospective studies 3) retrospective studies 4) case series 5) prevalence surveys |
|
|
Term
20. what is an experimental study? |
|
Definition
gold standard! -compares 2 equal groups in which 1 variable is manipulated and its effect is measured -uses double-blinding and well-matched controls to ensure accurate data -not always possible to do this kind of study b/c of ethical concerns |
|
|
Term
21. what are prospective studies? why are they important? |
|
Definition
aka observational, longitudinal, cohort, incidence, or follow-up studies -involves choosing a sample and dividing it into 2 groups based on presence or absence of a risk factor, then following the groups over time to see what diseases they develop -can calculate relative risk and incidence from this type of study -note: this type of study is time-consuming but practical for common diseases |
|
|
Term
22. what are retrospective studies? discuss their advantages and disadvantages. |
|
Definition
aka case control -choose populations after the fact, based on presence (cases) or absence (controls) of disease -information can be collected about risk factors -ex: pick ppl w/ lung cancer and w/o lung cancer, and then see who smoked more before developing lung ca -can calculate odds ratio (cannot get true relative risk or measure incidence) -compared to prospective studies, these studies are less expensive, less time-consuming, and more practical for rare diseases |
|
|
Term
23. what is a case series study? how is it used? |
|
Definition
describes clinical presentation of ppl w/ a certain disease -good for extremely rare diseases (as are retrospective studies) -study may suggest a need for retrospective or prospective study |
|
|
Term
24. what is a prevalence survey? how is it used? |
|
Definition
aka cross-sectional survey -looks at prevalence of a disease and the prevalence of risk factors -when used to compare 2 different cultures or populations, a prevalence survey may suggest a possible cause of a disease -can test hypothesis with a prospective study -ex: researchers found higher prevalence of colon cancer and higher fat diet in US vs. lower prevalence of colon cancer and lower fat diet in Japan |
|
|
Term
25. what is difference between incidence and prevalence |
|
Definition
incidence: # of new cases of disease in a unit of time (ex: 1 year) (equals absolute or total risk of developing a condition, unlike relative or attributable risk) -prevalence: total # cases of a disease (new and old) at a certain point in time |
|
|
Term
26. if a disease can be treated only to the point that people can be kept alive longer w/o being cured, what happens to incidence and prevalence of the disease? |
|
Definition
Classic Step 2 question. -Nothing happens to incidence, but prevalence increases -in short-term diseases like the flu, incidence may be higher than prevalence -in chronic diseases like DM or HTN, prevalence is greater than the incidence |
|
|
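The incidence/prevalence pattern in the card above follows from the standard steady-state approximation prevalence = incidence rate x average disease duration (a relationship the card implies but does not state; the numbers below are made up):

```python
# Short-duration disease (flu-like): prevalence ends up below annual incidence;
# long-duration (chronic) disease: prevalence far exceeds annual incidence.
incidence = 0.01                       # 1 new case per 100 person-years
for duration in (0.1, 10, 30):         # average disease duration, years
    prevalence = incidence * duration
    print(f"duration {duration:>4} yr -> prevalence ~{prevalence:.3f}")
```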
Term
27. what is an epidemic? |
|
Definition
the observed incidence greatly exceeds expected incidence |
|
|
Term
28. when do you use a chi-squared test, T-test, and analysis of variance test? |
|
Definition
all of these tests are used to compare different sets of data -chi-squared test: compares percentages or proportions (nonnumeric or nominal data) -T-test: compares two means -ANOVA: compares 3 or more means |
|
|
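In practice a library such as scipy.stats (chi2_contingency, ttest_ind, f_oneway) runs these tests; the hand-rolled sketch below, using made-up numbers, just shows what the two simplest statistics compare:

```python
import statistics

def chi2_2x2(a, b, c, d):
    """Chi-squared statistic for a 2x2 table of counts (shortcut formula)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def t_stat(x, y):
    """Two-sample t statistic (equal-variance form) for comparing two means."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * statistics.variance(x) + (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    return (statistics.mean(x) - statistics.mean(y)) / (sp2 * (1 / nx + 1 / ny)) ** 0.5

print(round(chi2_2x2(10, 20, 20, 10), 2))     # 6.67 for this table (0 if groups matched)
print(round(t_stat([5, 6, 7], [8, 9, 10]), 2))  # about -3.67
```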
Term
29. what is difference b/t nominal, ordinal, and continuous types of data? |
|
Definition
-nominal data: no numeric value (ex: the day of the week) -ordinal data: ranking but no quantification; ex: class rank, w/o specifying how far #1 is ahead of #2 -continuous data: most numeric data (ex: weight, bp, age) -imp. distinction b/c chi-squared must be used for nominal or ordinal data, but T-test or ANOVA is used to compare continuous data |
|
|
Term
30. what does the p-value mean? |
|
Definition
High-yield on Step 2. -If p<0.05 for a set of data, there is less than a 5% chance (0.05=5%) that the data were obtained by random error or chance. If p<0.01, less than a 1% chance. -ex: bp in the control group is 180/100 but falls to 120/70 after giving drug X; a p-value <0.10 means there is less than a 10% chance that this change was due to random error or chance. p<0.05 is used as the cutoff for statistical significance in the medical literature. |
|
|
Term
31. what three points about p-values should be remembered for the Step 2 exam? |
|
Definition
1. a study with a p-value < 0.05 may still have serious flaws. 2. a low p-value does not imply causation. 3. a study that has statistical significance does not necessarily have clinical significance. For example, if I tell you that drug X can lower bp from 130/80 to 129/79 with p<0.0001, you still would not use drug X b/c the result is not clinically significant. |
|
|
Term
32. Explain the relationship of the p-value to the null hypothesis. |
|
Definition
null hypothesis (hypothesis of no difference). -in a study of HTN, the null hypothesis says that the drug under investigation does not work; therefore, any difference in bp is due to random error or chance. When the drug really does work, the null hypothesis must be rejected; with p<0.05, you can confidently reject the null hypothesis b/c the p-value says there is less than a 5% chance that the observed difference is due to chance. -the p-value represents the chance of making a type I error, that is, claiming an effect or difference when none exists, or rejecting the null hypothesis when it is true. If p<0.07, there is less than a 7% chance you are making a type I error if you claim a true difference (not due to random chance) in bp b/t control and experimental groups. |
|
|
Term
33. what is a type II error? |
|
Definition
In a type II error the null hypothesis is accepted when in fact it is false. In the above example, it would mean that the antihypertensive drug works but the experimenter says that it does not. |
|
|
Term
34. what is the power of a study? how do you increase the power of a study? |
|
Definition
Power measures the probability of rejecting the null hypothesis when it is false (a good thing). The best way to increase power is to increase the sample size. |
|
|
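The sample-size effect can be seen analytically. This sketch assumes a two-sided z-test for a mean shift of 0.5 SD at alpha = 0.05 (the far tail is ignored, a standard approximation); all numbers are illustrative:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def power(effect_sd, n, z_crit=1.96):
    """Approximate power to detect a mean shift of effect_sd SDs with n subjects."""
    se = 1 / math.sqrt(n)              # SD normalized to 1
    return phi(effect_sd / se - z_crit)

# Power climbs toward 1.0 as the sample size grows.
for n in (10, 25, 100):
    print(n, round(power(0.5, n), 2))
```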
Term
35. what are confounding variables? |
|
Definition
Confounding variables are unmeasured variables that affect both the independent (manipulated, experimental) variable and dependent (outcome) variables. |
|
|
Term
36. Discuss nonrandom or nonstratified sampling. |
|
Definition
City A and City B can be compared, but they may not be equivalent. For example, if city A is a retirement community and city B is a college town, of course city A will have higher rates of mortality and heart disease if the groups are not stratified into appropriate age-specific comparisons. |
|
|
Term
37. what is nonresponse bias? |
|
Definition
occurs when ppl do not return printed surveys or answer the phone in a phone survey. If nonresponse accounts for a significant percentage of results, the experiment will suffer. The first strategy in this situation is to visit or call the nonresponders repeatedly. -if this strategy is unsuccessful, list the nonresponders as unknown in the data analysis and see if any results can be salvaged. Never make up or assume responses. |
|
|
Term
38. Explain lead-time bias. |
|
Definition
Lead-time bias is due to time differentials. The classic example is a cancer screening test that claims to prolong survival compared with older survival data, when in fact the difference is due only to earlier detection, not to improved treatment or prolonged survival. |
|
|
Term
39. Explain admission rate bias. |
|
Definition
The classic admission rate bias occurs when an experimenter compares the mortality rates for MI (or some other disease) in hospitals A and B and concludes that hospital A has a higher mortality rate. But the higher rate may be due to tougher admission criteria at hospital A, which admits only the sickest patients with MI; hence the higher mortality rate, although hospital A's care may be superior. The same bias can apply to a surgeon's mortality and morbidity rates if he or she takes only tough cases. |
|
|
Term
40. what is recall bias? |
|
Definition
Recall bias is a risk in all retrospective studies. When people cannot remember correctly, they may inadvertently over- or underestimate risk factors. |
|
|
Term
41. Explain interviewer bias. |
|
Definition
Interviewer bias occurs in the absence of blinding. A scientist who receives big money to do a study wants to find a difference b/t cases and controls; thus, he or she inadvertently calls the same patient comment or outcome "not significant" in the control group and "significant" in the treatment group. |
|
|
Term
42. what is unacceptability bias? |
|
Definition
unacceptability bias occurs when ppl do not admit to embarrassing behavior, claim to exercise more than they do to please the interviewer, or claim to take experimental medications when they actually spit them out. |
|
|