Term
Inductive reasoning |
Definition
The derivation of general ideas from specific observations - not used by scientists |
|
|
Term
Hypothetico-deductive reasoning |
|
Definition
Observations lead to plausible hypotheses, which we then attempt to falsify; if we cannot prove them false, they are good hypotheses, but not necessarily right |
|
|
Term
Theory |
Definition
A general set of ideas or rules used to explain a group of observations |
|
|
Term
|
Definition
|
|
Term
Paradigm shift |
Definition
A change in the way we think about a subject |
|
|
Term
Null hypothesis |
Definition
H0: the form of a hypothesis that we formally test; it predicts that nothing will happen |
|
|
Term
Prediction |
Definition
A specific prediction about an experiment |
|
|
Term
Nominal data |
Definition
Data in categories with names |
|
|
Term
Discrete data |
Definition
Data that increase in whole-number (integer) steps |
|
|
Term
Ordinal data |
Definition
Non-quantitative ranked data, normally used in questionnaires |
|
|
Term
Continuous data |
Definition
Quantitative measurements on a continuous scale |
|
|
Term
Descriptive statistics |
Definition
Measures calculated from a data set which summarise some characteristics of the data |
|
|
Term
Measures of central tendency |
|
Definition
The mean, median and mode |
|
Term
Histogram |
Definition
A graph showing the total number of quantitative observations in each of a series of numerically ordered categories |
|
|
Term
|
Definition
|
|
Term
Sum of squares (SS) |
Definition
Total of all the squared deviates in a data set; squaring removes the minus sign, so SS shows the magnitude of the variability but not its direction |
|
|
Term
Variance |
Definition
s^2: the average size of the squared deviates in a sample; an estimate of the population variance |
|
|
Term
Standard deviation |
Definition
s - the average size of deviates in a data set. |
|
|
Term
Population |
Definition
All individuals in a group |
|
|
Term
Sample |
Definition
A sub-set of a population, meant to represent it |
|
|
Term
Normal distribution |
Definition
Bell-shaped (Gaussian); about 68% of all data points lie within one SD of the mean |
|
|
Term
Standard error of the mean |
|
Definition
A measure of the confidence we have in our sample mean as an estimate of the real mean |
|
|
Term
Skew |
Definition
If skewed to the right, there is a long tail to the right, and vice versa for left |
|
|
Term
Parametric tests |
Definition
Tests which make many assumptions |
|
|
Term
Non-parametric tests |
Definition
Tests which make fewer assumptions |
|
|
Term
Poisson distribution |
Definition
A distribution where a maximum possible count is far above the mean, resulting in a skew |
|
|
Term
Binomial distribution |
Definition
A distribution where the maximum count is close to the mean |
|
|
Term
Bar chart |
Definition
Used for visualising differences |
|
|
Term
Scatter plot |
Definition
Used for visualising trends |
|
|
Term
Precision |
Definition
A measurement is not precise if there is an unbiased measurement error |
|
|
Term
Accuracy |
Definition
A measurement is accurate if it is free from bias; bias occurs when there is a systematic error in your measurements |
|
|
Term
Confounding effect |
Definition
A confounding effect is something that influences your results in a way that can be confused with the effect you are studying |
|
|
Term
Threshold effect |
Definition
Effects of a variable are only visible once above a certain point |
|
|
Term
Ceiling effect |
Definition
Effects of a variable are only visible below a certain point |
|
|
Term
Independent samples t-test |
|
Definition
A statistical test designed to test for a difference between the means of two samples of continuous data |
|
|
Term
Type I error |
Definition
The rejection of the null hypothesis when it is true |
|
|
Term
Type II error |
Definition
The failure to reject the null hypothesis when it is false |
|
|
Term
Pseudoreplication |
Definition
The use of non-independent data points as if they were independent |
|
|
Term
Paired t-test |
Definition
A test designed for samples that are not independent of each other, normally used to examine change |
|
|
Term
Homogeneity of variance |
Definition
If the variance is homogeneous, it is the same in each sample |
|
|
Term
Chi-squared test |
Definition
A test which is used to examine differences between observed and expected counts |
|
|
Term
Pearson's correlation coefficient |
|
Definition
The statistic used to test the significance of correlations between two variables. Can only be used with linear relationships and normal distributions |
|
|
Term
Spearman's rank correlation coefficient |
|
Definition
Non-parametric correlation test |
|
|
Term
ANOVA (analysis of variance) |
Definition
Tests the null hypothesis that the sample means are not different |
|
|
Term
Kruskal-Wallis test |
Definition
Non-parametric one way ANOVA |
|
|
Term
ANCOVA (analysis of covariance) |
Definition
Combines ANOVA and regression |
|
|
Term
Properties of a good hypothesis |
Definition
Clear, Precise, Plausible, Able to produce testable predictions |
|
|
Term
|
Definition
|
|
Term
|
Definition
|
|
Term
|
Definition
|
|
Term
Sum of squares formula |
Definition
SS = the sum from i = 1 to n of (xi - x̄)^2 |
|
|
Term
Variance formula |
Definition
s^2 = [the sum from i = 1 to n of (xi - x̄)^2] / (n - 1) |
|
|
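A minimal Python sketch of the sum of squares, sample variance (s^2), and standard deviation (s) cards above, using the n - 1 denominator; the data values are an arbitrary example, not from the deck:

```python
import math

def sum_of_squares(data):
    """SS: total of the squared deviates from the sample mean."""
    mean = sum(data) / len(data)
    return sum((x - mean) ** 2 for x in data)

def sample_variance(data):
    """s^2: sum of squares divided by n - 1 (estimate of the population variance)."""
    return sum_of_squares(data) / (len(data) - 1)

def sample_sd(data):
    """s: square root of the sample variance."""
    return math.sqrt(sample_variance(data))

data = [2, 4, 4, 4, 5, 5, 7, 9]
print(sum_of_squares(data))   # 32.0
print(sample_variance(data))  # ≈ 4.571
print(sample_sd(data))        # ≈ 2.138
```

Python's standard library `statistics.variance` and `statistics.stdev` use the same n - 1 denominator.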
Term
|
Definition
|
|
Term
95% of samples are within |
|
Definition
1.96 standard deviations of the mean |
|
Term
Standard error of mean formula |
|
Definition
SEM = s / √n (the sample standard deviation divided by the square root of the sample size) |
|
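Building on the card above, the standard error of the mean (SEM = s / √n, with s the sample standard deviation and n the sample size) can be sketched in Python; the data are an arbitrary example:

```python
import math

def standard_error(data):
    """SEM = s / sqrt(n): confidence in the sample mean as an estimate of the real mean."""
    n = len(data)
    mean = sum(data) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))  # sample SD
    return s / math.sqrt(n)

print(standard_error([2, 4, 4, 4, 5, 5, 7, 9]))  # ≈ 0.756
```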
Term
Which tests have more statistical power? |
|
Definition
Parametric tests |
|
Term
|
Definition
|
|
Term
Parametric test standard assumptions |
|
Definition
Independence; homogeneity of variance |
|
|
Term
Alternative t test if the variances are not the same |
|
Definition
Welch's t-test |
|
Term
To test if the variances are the same? |
|
Definition
F-test |
|
Term
If data is not normal, you can transform it by... |
|
Definition
Taking the square root, log, or arcsine of all the points, reducing a right skew; squaring all the points, reducing a left skew |
|
|
Term
Alternative T-test if the data is not normal |
|
Definition
Mann-Whitney U test |
|
Term
Bonferroni correction |
Definition
Adjusts for the chance of a Type I error when making multiple comparisons |
|
|
Term
In regression, we analyse |
|
Definition
the effect of one variable on another variable |
|
|
Term
If events A and B are mutually exclusive, the probability of event A or B is P(A or B) = |
|
Definition
The sum of the two probabilities: P(A or B) = P(A) + P(B) |
|
|
Term
The sum of the probability of A happening and not happening is |
|
Definition
|
|
Term
If the two events are independent, the probability of A and B is |
|
Definition
The product of the two probabilities P(A and B) = P(A).P(B) |
|
|
Term
Binomial probability distribution |
|
Definition
P(i) = [n! / (i! x (n - i)!)] x p^i x (1 - p)^(n - i), where p is the probability of the first outcome, i the number of events, and n the number of trials |
|
|
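The binomial formula above can be sketched in Python; `math.comb` supplies the n! / (i! x (n - i)!) term. The coin-toss example is an illustration, not from the deck:

```python
from math import comb

def binomial_prob(n, i, p):
    """P(i): probability of exactly i of n trials having the first outcome,
    where p is the probability of the first outcome in one trial."""
    return comb(n, i) * p**i * (1 - p) ** (n - i)

# e.g. probability of exactly 3 heads in 10 fair coin tosses
print(binomial_prob(10, 3, 0.5))  # 0.1171875
```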
Term
As the binomial distribution gets bigger, we expect to see |
|
Definition
A closer approximation to the normal distribution |
|
Term
Poisson distribution formula |
Definition
P(i) = (m^i x e^-m) / i! |
|
|
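The formula above (the Poisson probability of i events given a mean count m) translates directly to Python; the example mean is arbitrary:

```python
from math import exp, factorial

def poisson_prob(m, i):
    """P(i) = m^i * e^-m / i!, the probability of i events when the mean count is m."""
    return m**i * exp(-m) / factorial(i)

# e.g. probability of observing exactly 2 events when the mean is 3
print(poisson_prob(3, 2))  # ≈ 0.224
```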
Term
Factorial |
Definition
Putting ! after a number means multiplying together all integers up to that number, e.g. 3! is 1 x 2 x 3 |
|
|
Term
The binomial probability distribution can be used to |
|
Definition
figure out the probability of a certain size deviation from an expectation |
|
|
Term
To convert non-normal data to ordinal |
|
Definition
Rank the data points |
|
Term
|
Definition
|
|
Term
|
Definition
the sum of an infinite series |
|
|
Term
Logarithm (base 10) of x |
Definition
The power to which ten has to be raised before it is equal to x |
|
|
Term
Z-test |
Definition
Z = (x1 - x2) / √((s1^2/n1) + (s2^2/n2)), where Z is the test statistic, n1 is the first sample size and n2 the second, x1 is the first mean and x2 the second, and s1 is the first standard deviation and s2 the second |
|
|
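A quick Python sketch of the Z statistic above; the sample means, standard deviations, and sizes in the example are arbitrary:

```python
import math

def z_statistic(x1, x2, s1, s2, n1, n2):
    """Z = (x1 - x2) / sqrt(s1^2/n1 + s2^2/n2)."""
    return (x1 - x2) / math.sqrt(s1**2 / n1 + s2**2 / n2)

print(z_statistic(5.0, 4.0, 1.2, 1.5, 50, 60))  # ≈ 3.88
```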
Term
To convert non-normal data to normal |
|
Definition
Transforming it, e.g. by taking square roots, logs, or arcsines |
|
Term
|
Definition
Binomial, Poisson, chi-squared |
|
|
Term
Tests for normality |
Definition
Kolmogorov-Smirnov (less powerful, more general) or Shapiro-Wilk |
|
|
Term
Continuous data tests of difference |
|
Definition
Not normal: Mann-Whitney for two treatments, Kruskal-Wallis for more than two, Wilcoxon for paired. Normal: t-test (paired or unpaired), ANOVA or two-way ANOVA |
|
|
Term
Continuous data tests of trends |
|
Definition
Normal: Pearson's or regression. Not normal: Spearman's rank |
|
|