Term
Paradigm |
|
Definition
• An overarching philosophical system denoting a particular ontology, epistemology, methodology, and axiology • Begins with a paradigm ⇒ moves to a question ⇒ inform yourself of the research out there, find out what other people know ⇒ identify appropriate methods and strategies ⇒ show that your strategies will work, clearly described ⇒ findings are clearly stated, easy to draw conclusions based on clarity of research ⇒ Conclusions – clear connections between findings and conclusions • Every research question starts out with a specific paradigm – but might have different approaches; this helps to inform what you are doing. Don’t dismiss the other types of research even if you have a different research approach or paradigm, but you can only choose one paradigm approach to research • par·a·digm Noun – A typical example or pattern of something; a model. A worldview underlying the theories and methodology of a particular scientific subject |
|
|
Term
Ontology |
|
Definition
specifies the most fundamental categories of existence (how we label and categorize things, people, and concepts), discovering categories and fitting objects into these categories in ways that make sense – “onto” the next person, labeling them – what is knowable |
|
|
Term
Epistemology |
|
Definition
what we feel we can know about things in our reality, questions like “Can we apprehend reality fully or only partially?” – the study of knowledge (epist-) |
|
|
Term
Methodology |
|
Definition
how we go about learning these discoveries – the tools and strategies we feel are most appropriate to facilitate knowledge production and construction |
|
|
Term
Axiology |
|
Definition
the values and ethics of research, concerned with questions like the purpose of research, its value, and the ethical conduct of research – ethical questions – honesty, true data that isn’t made up (the study of value) – think: if you kill someone with an ax, it’s ethically wrong (ax-iology) |
|
|
Term
Positivist and Post-Positivist paradigms – |
|
Definition
• Researchers who believe some objective reality is guided by universal laws and principles and can be known through empirical research – concrete – like math – measurable • Teachers use grades for this. Grades, classes, factors identified as data – did it impact the outcome? More objective – more quantitative – objective measures – they think this approach is more valid • Empirical research is a way of gaining knowledge by means of direct and indirect observation or experience. |
|
|
Term
Positivists Believe: Ontology- Epistemology- Methodology- Axiology- |
|
Definition
• Ontology- Objective reality that is guided by universal laws and principles • Epistemology- Reality is fully knowable – we can know everything! Even if it is imperfect • Methodology- Reality can be known through empirical research conducted through the scientific quantitative methodologies. • Axiology- Research that has intrinsic value. Experimental and generalizable research, ethics described in terms of objectivity, scientific rigor, and adherence to IRB guidelines. |
|
|
Term
Post-Positivists Believe: Ontology- Epistemology- Methodology- Axiology- |
|
Definition
• Ontology- An objective reality that is guided by universal principles but is also dynamic • Epistemology- Reality is imperfectly knowable • Methodology- Reality can be known through empirical research done through rigorous, systematic quantitative and qualitative methodologies • Axiology- Research has intrinsic value, ethics described in terms of objectivity, scientific rigor, adherence to IRB guidelines |
|
|
Term
Constructivist-interpretivist paradigm- |
|
Definition
• Researchers who believe that reality is relative, constructed by local, self-defined communities, and can be understood through exchange (communication) and interpretation between researcher and subject • Have to understand how they are interpreting this knowledge, such as math. They do not think it is as concrete. • Socially constructed by the communities responsible for creating this reality, background knowledge – can’t assume we experience the same reality • They want to know the multiple realities that exist – between males and females, compare questions; questions are informed and made by what they see – how does a night class compare to a weekend class? • It’s based on the environment and population – focused more on localized experiences of those directly involved; not what is on a survey, but more on the individual interview – those experiences • Questions are different from Positivists’ – focus on why someone would select a different major, interviews, look for common themes in interviews, less objective • Must understand the perspective to appreciate the research conducted – different |
|
|
Term
Constructivist-interpretivists Believe: Ontology- Epistemology- Methodology- Axiology- |
|
Definition
• Ontology- Reality is relative and locally and socially constructed, influenced by place and time • Epistemology- Learning is transactional, subjective – re-constructions of reality – it changes • Methodology- Reality comes from engagement, dialogue, conversations with members of local communities • Axiology- Transactional knowledge is intrinsically valuable (personal meaning); ethics described in terms of authentic representation of multiple voices and perspectives |
|
|
Term
Critical-emancipatory paradigm- |
|
Definition
• Researchers who believe that reality is socially constructed • Shaped by social, political, gender, economic, ethnic values • Can be understood by understanding and uncovering these values within a historical/theoretical framework • Get an idea in their head – then find the research to prove it • Driven by theoretical framework – they want to prove it • Must understand the beliefs behind it |
|
|
Term
Critical-emancipatory Researchers Believe: Ontology- Epistemology- Methodology- Axiology- |
|
Definition
• Ontology- Reality is shaped by social, political, cultural, economic, racial/ethnic, and gender values developed over time • Epistemology- Knowledge is transactional, subjective, and value-mediated • Methodology- Reality is learned through dialectics and social critique (discussion and reasoning by dialogue as a method of intellectual investigation) • Axiology- knowledge leading to social transformations, equity and justice are valued. Ethics described in terms of revelation and erosion of misapprehension. |
|
|
Term
Action research paradigm- |
|
Definition
• Both paradigm and inquiry strategy • Researchers believe that reality is transformable and can be understood through the practitioners’ reflexive inquiry • Believe that I CAN DO IT! I can make the change • Does it lead to improvement – improving the practice of ____ • Want to make a difference – use the research to make a difference in one’s classroom • This type of research is not unique to education, nor did it start there |
|
|
Term
Action Researchers Believe: Ontology- Epistemology- Methodology- Axiology- |
|
Definition
• Ontology- Reality is contextual and transformable – nature of being • Epistemology- Tentative, ongoing, actionable, changeable based on the research • Methodology- Reality can be known through collaborative, reflexive, and triangulated inquiry • Axiology- Knowledge that can improve organizational or individual practice to enhance outcomes is valued; ethics center on emancipatory learning and progressive change |
|
|
Term
Quantitative Research |
|
Definition
Numbers and Statistics – experimental, treatment and control |
|
|
Term
Experimental Design is Characterized by |
|
Definition
o Control/treatment groups o Random assignment of study participants o High internal validity (causality) – shows that the intervention is what caused the result o Independent and dependent variables o Seeks to establish causation |
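A minimal sketch of random assignment (hypothetical Python – the function name and participant data are made up, not from the notes):

```python
import random

def random_assignment(participants, seed=42):
    """Randomly split participants into treatment and control groups."""
    pool = list(participants)
    random.Random(seed).shuffle(pool)   # seeded only so the example is reproducible
    half = len(pool) // 2
    return pool[:half], pool[half:]     # (treatment, control)

treatment, control = random_assignment([f"student_{i}" for i in range(20)])
print(len(treatment), len(control))     # 10 10
```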
|
|
Term
Quasi-experimental Design (Non-Equivalent Groups Design NEGD) |
|
Definition
o Often in education, we can’t get that “true” causality, due to random factors o Similar to the experimental design, but lacks random assignment of study participants o Positivists |
|
|
Term
Correlational Design |
|
Definition
o Identifies the significance of relationships between two variables (e.g., family engagement and grades); may hint at, but cannot establish, causality o You can survey families, for example, and then have an outcome, like their grades. Then you can see if there is a link between family engagement and higher grades o Positive – Family Engaged: YES, Grades: GOOD o Negative – Family Engaged: YES, Grades: BAD o Or No correlation |
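A minimal sketch of checking for a positive, negative, or no correlation (hypothetical Python; the engagement/grade numbers are made up and it assumes scipy is installed):

```python
from scipy import stats

# Hypothetical data: a family-engagement score and a grade average per student
engagement = [1, 2, 2, 3, 4, 4, 5, 5]
grades = [68, 70, 75, 74, 82, 80, 88, 91]

r, p = stats.pearsonr(engagement, grades)
# r near +1 = positive, near -1 = negative, near 0 = no correlation
print(f"r = {r:.2f}, p = {p:.3f}")
```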
|
|
Term
Descriptive Research |
|
Definition
o Can take a qualitative or quantitative approach or a combination of both • When quantitative – uses numbers, or numerical data, to characterize/describe individuals, organizations, and/or conditions o Howard County Common Core – look at 3rd grade vs. 5th grade – see if there is a difference in its implementation between grades o Describing the phenomenon |
|
|
Term
Causal Comparative Design - ex post facto research (ex post facto – means after the fact) |
|
Definition
o Examines differences between two or more groups on a given phenomenon o Differs from experimental studies because the researcher is not causing/conducting an intervention o Looking at the differences between groups after the fact; can’t exactly say correlational, but you can see an implied cause and effect o Different from experimental design even though both designs seek to determine cause and effect, make group comparisons, and use statistical analyses and vocabulary to discuss results o The difference from experimental design is that the causal-comparative researcher does no manipulation of conditions, since the presumed cause has already occurred – e.g., smoking already occurred before a study was conducted on its effects, like lung cancer o EXAMPLE: Smoking and 16-year-olds – one group not smoking, one group forced to smoke – not ethical – no consent, intentionally causes harm – but what you can do is look at willing participants who already smoke by looking at the medical records – can’t do an experimental design on this, but you can look at those who meet the criteria already – leads to higher risk/likelihood o Validity – the larger the sample, the more generalizable the results |
|
|
Term
Quantitative Research – non-experimental |
|
Definition
no direct control over causation |
|
|
Term
Qualitative Research |
|
Definition
interpret things within a natural setting o Seeks to interpret and explain social processes, relationships, and outcomes by investigating in natural settings (classrooms, not labs) o Also characterized by multiple designs – • Case studies – 3 teachers to follow • Ethnography – from anthropology – trying to understand a culture – might not be aware of the culture that you’re in – don’t always know – understand it from a cultural perspective – much longer in the field – like 1–3 years • Grounded theory – developing a theory from the ground up – not taking a theory and proving it – immersion • Phenomenological – trying to understand it from that person’s perspective • Oral History – try to understand their life – leadership from when they were kids – how did they learn? Trying to understand where the person is now based on the past |
|
|
Term
Mixed Methods Research |
|
Definition
balance when using both; requires a team of researchers o Combines methods and studies – quant. and qual. o Includes explanatory, exploratory, and triangulation designs o Uses numbers to interpret o Qualitative study, but might have a survey o No hardcore statistics or causality o In the design itself – collecting both for different reasons o TRUE mixed methods are rare |
|
|
Term
Action Research |
|
Definition
o A process where people/participants examine their own educational practice through systematic inquiry o Different designs include classroom-based (by individual teacher), collaborative, school-wide, and district-wide studies |
|
|
Term
Research Questions |
|
Definition
o The nature of the research question determines the most suitable design o Central research questions often begin with “what,” “how,” and sometimes “why” o Effective research questions are • Significant – interesting to yourself and others • Researchable – through empirical investigation – within constraints, like funding or time, access to participants • Manageable – not too general, but not too specific; can be limited by setting and participants – not too big – logical sequencing – affects quality of research o Research Questions and Hypotheses – • Some research questions lend themselves to hypotheses • Hypothesis – tentative prediction (not qualitative) • Null Hypothesis – states that no statistically significant relationship exists • Statistical significance is described in terms of probability – ex: less than a .05 chance (p. 35 in book) – beyond random chance |
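A minimal sketch of testing a null hypothesis against the .05 level (hypothetical Python; the scores are made up and it assumes scipy is installed):

```python
from scipy import stats

# Hypothetical test scores for two groups
group_a = [72, 75, 78, 81, 84, 86, 90]
group_b = [65, 68, 70, 73, 74, 77, 80]

t, p = stats.ttest_ind(group_a, group_b)
alpha = 0.05                      # the "less than a .05 chance" cutoff
if p < alpha:
    print(f"p = {p:.3f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p:.3f} >= {alpha}: fail to reject the null hypothesis")
```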
|
|
Term
Variables |
|
Definition
key factor examined – labels properties in a study that can take on different values or amounts o Categorical – variables that represent mutually exclusive, discrete groups, like gender, race, education level – won’t change o Continuous – variables that aren’t discrete unless made so – can have any value between their minimum and maximum values – age, weight, score, 0-5, 6-10 |
|
|
Term
Independent Variables |
|
Definition
variables that are controlled by the researcher – “I” change them • 1st EX: the groups given strawberry, blueberry, or spinach as a supplement • 2nd EX: Women given beta-carotene or placebo • 3rd EX: Brake lights – changing the brightness/dimness |
|
|
Term
Dependent Variables |
|
Definition
response variable affected by the independent variable – the outcome – the researcher does not manipulate it • 1st EX: Memory and motor skills affected by eating blueberries – response to it, memory, performance on the test • 2nd EX: Women’s cancer rates as a result of beta-carotene • 3rd EX: Stopping time, as affected by the brightness of brake lights |
|
|
Term
Confounding variables - controlled variables |
|
Definition
factors that interact with independent variables to affect the dependent variables |
|
|
Term
Extraneous Variables |
|
Definition
conditions or factors that are often outside our control (fire drill) that can influence the implementation and outcomes of a study |
|
|
Term
Hypothesis |
|
Definition
an informed prediction of the relationship between variables |
|
|
Term
Abstract |
|
Definition
Summarizes the paper, the question, purpose – short paragraph |
|
|
Term
Introduction |
|
Definition
rationale for why the study was performed – 1-2 paragraphs; reviews previous research and the general topic, citing leading researchers |
|
|
Term
Review of Literature (Literature Review) |
|
Definition
has a heading, what research has been done that you are building on (include APA citation) |
|
|
Term
Methods |
|
Definition
begins with research question and hypothesis |
|
|
Term
Results |
|
Definition
data table – quantitative; in the write-up, explain what is in the table/chart – describe it o If peer reviewed – the reviewer would make sure what is in the table matches the write-up |
|
|
Term
Discussion, then Conclusion |
|
Definition
Discuss findings – part of it is saying more research needs to be done (it encourages more funding); did your research raise more questions?; understand the limitations of your study |
|
|
Term
References |
|
Definition
EVERY AUTHOR put in the reference section must be cited in the text – parenthetically, or through discussion of their empirical study to describe and summarize it |
|
|
Term
Quantitative Research Report – structure |
|
Definition
• title • abstract • introduction (rationale for why the study was performed) • literature review (has a heading) – what research has been done that you are building on (includes APA citations) • methods (begins with research question and hypothesis; tells how you conducted the study; participants) • results – data, charts • discussion of findings (usually says more research needs to be done) • references (every author in the reference section should be cited somewhere in the text) |
|
|
Term
Qualitative Research Report – structure |
|
Definition
• Abstract • Intro • Literature review • Methods (mostly words instead of data; tables would have words instead of numbers; participants) • Results • Discussion • References |
|
|
Term
When note-taking – include – |
|
Definition
o Reference Info • Author, professional title, and position • Title, source, page numbers - pp o Research Approach – quant. or qual., mixed methods o Research Design – experimental, correlational, case study, quasi-experimental o Methods – participants, settings, data collection strategies and instruments o Summary or list of key findings o General impressions of strengths and weaknesses |
|
|
Term
In Quantitative Research – Validity: the four types are |
|
Definition
Credibility and trustworthiness of a research study; valid studies are a reliable reflection of reality. In QUANT research there are 4 different types of validity – Construct, Internal, External, Statistical |
|
|
Term
1. Statistical Validity – |
|
Definition
the degree to which an observed result can be relied upon and not attributed to random error in sampling or in measurement. IS THERE A RELATIONSHIP? • Strengthened by: Using well-validated measures, appropriate sampling procedures |
|
|
Term
2. Internal Validity – |
|
Definition
refers to the confidence that we can place in the cause-and-effect relationship in a study – causality i. Strengthened by: Using experimental designs with adequate controls to reduce or eliminate confounding variables
THERE ARE MANY THREATS! |
|
|
Term
Threats to Internal Validity |
|
Definition
Historical events • Maturation • Testing and Retesting • Instrumentation • Statistical Regression • Experimental Mortality (attrition) • Diffusion • Subject Effects (demoralization, rivalry) • Experimental Effects • Ethical Principles • Sampling |
|
|
Term
a. Historical events – |
|
Definition
events that happen during the course of the experiment (not something YOU did – like a snowstorm and a week off from school) plausibly affecting the study’s outcomes – you might not think of it until after the effect |
|
|
Term
b. Maturation – |
|
Definition
changes in the subjects/participants that may affect the dependent variable – length of time between studies, people change |
|
|
Term
c. Testing and Retesting – |
|
Definition
can influence awareness of variables or behavior (Hawthorne Effect) – makes students think about their own involvement – triggered motivation of participants |
|
|
Term
d. Instrumentation – |
|
Definition
measurement methods or procedures may not be equivalent |
|
|
Term
e. Statistical regression – |
|
Definition
of subjects starting out in extreme positions (regression to the mean) – second time you take a test, less extreme results |
|
|
Term
f. Experimental mortality |
|
Definition
subjects drop out of the study before it is finished (attrition) – your population can change or move away |
|
|
Term
g. Diffusion – |
|
Definition
those who get the stimulus spread it to the controls – changes things if the kids have been talking; it’s hard to contain the information, since they talk |
|
|
Term
h. Subject Effects – |
|
Definition
subject changes in behavior in response to the research i. Demoralization – subjects in control group find out they’re not the special group, lose interest in study, stop trying, bad effort ii. Rivalry – controls change behavior to try to beat the experimental group |
|
|
Term
i. Experimental Effects – |
|
Definition
refers to deliberate and unintentional influence experimenter has on the subjects (tone of voice, reinforcement of different behaviors) |
|
|
Term
j. Ethical Principles – |
|
Definition
also can introduce threats to validity because some people will refuse to participate, a small price to pay to maintain ethical standards and protect research participants |
|
|
Term
k. Sampling – |
|
Definition
– selecting inappropriate participants or inadequate sample sizes can introduce threats to validity |
|
|
Term
3. Construct Validity – |
|
Definition
refers to whether the operational definition of a variable actually reflects true theoretical meaning of a concept i. Strengthened by: Using well-validated and investigated constructs or measures |
|
|
Term
4. External Validity – |
|
Definition
refers to the generalizability of the study findings – to other people and locations i. Strengthened by: Using or gathering a representative sample (if possible), clearly describing the sample, so that readers know the limits of generalization |
|
|
Term
Population |
|
Definition
the larger group – the total group to which results can be compared |
|
|
Term
Sample |
|
Definition
group of individuals from whom data are collected |
|
|
Term
Sampling |
|
Definition
o A sample is a set of units observed from all possible units o The desire in taking a sample is to learn about a larger group, the population o The sampling frame is the set of units the researcher will take the sample from o Ideally the sampling frame is the same as the population of interest (in reality this is often not possible) o Sampling design can aid in obtaining a representative sample of the population of interest |
|
|
Term
Sampling Error |
|
Definition
o The chance and random variation in variables that occurs when any sample is selected from the population o Some sampling error is to be expected o To control sampling error, researchers use various probability sampling designs or methods – this is an important part of research design in quantitative studies |
|
|
Term
• Probability Sampling Designs – |
|
Definition
o Subjects are drawn from larger population in such a way that the probability of selecting each member is known, each method involves some form of random sampling |
|
|
Term
o Random sampling – |
|
Definition
each member of the population has an equal probability of being chosen |
|
|
Term
o SRS - Simple Random sampling – |
|
Definition
should look like the population at large – a sample in which all units in the sampling frame (group) have equal probability of selection – chosen out of a hat, bingo, computer generated |
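A minimal sketch of SRS (hypothetical Python; the sampling frame is made up):

```python
import random

# Hypothetical sampling frame of 500 students
frame = [f"student_{i}" for i in range(500)]

rng = random.Random(42)           # seeded only so the example is reproducible
sample = rng.sample(frame, k=50)  # every unit has an equal chance of selection
print(sample[:5])
```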
|
|
Term
o Systematic sampling – |
|
Definition
taking every 3rd (nth) kid in your sampling frame or survey population (sources for this – phone book, voter registration, membership list) |
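A minimal sketch of systematic sampling (hypothetical Python; the frame and interval are made up):

```python
import random

# Hypothetical sampling frame; take every nth unit after a random start
frame = [f"student_{i}" for i in range(500)]
n = 10                                  # sampling interval ("every 10th kid")
start = random.Random(7).randrange(n)   # random starting point in the first interval
sample = frame[start::n]
print(len(sample))                      # 50
```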
|
|
Term
• Reliability Coefficient – |
|
Definition
correlation coefficient that is used as an index of reliability (e.g., Cronbach’s alpha) |
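A minimal sketch of computing Cronbach’s alpha from item scores (hypothetical Python; the formula is the standard one, but the scale data are made up):

```python
import statistics

def cronbach_alpha(items):
    """items: one list of scores per item, respondents in the same order."""
    k = len(items)
    item_vars = [statistics.variance(item) for item in items]
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    return (k / (k - 1)) * (1 - sum(item_vars) / statistics.variance(totals))

# Hypothetical 4-item scale answered by 5 respondents
items = [[3, 4, 5, 2, 4],
         [2, 4, 5, 3, 4],
         [3, 5, 4, 2, 5],
         [3, 4, 5, 3, 4]]
print(round(cronbach_alpha(items), 2))
```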
|
|
Term
o Stratified Random Sampling |
|
Definition
the population is separated into subgroups or strata (like gender, age, location – a/s/l) and from each group a simple random or systematic sample is taken – the number of subjects drawn from each group can be proportional or non-proportional to the total population – take a proportion from within each group |
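A minimal sketch of proportional stratified sampling (hypothetical Python; the strata and sampling fraction are made up):

```python
import random
from collections import defaultdict

# Hypothetical frame of (person, stratum) pairs
frame = [(f"p{i}", "male" if i % 3 else "female") for i in range(300)]

def stratified_sample(frame, fraction, seed=42):
    """SRS of the same fraction within each stratum (proportional allocation)."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for unit, stratum in frame:
        strata[stratum].append(unit)
    sample = []
    for units in strata.values():
        sample += rng.sample(units, k=round(len(units) * fraction))
    return sample

print(len(stratified_sample(frame, 0.10)))  # ~30, proportional per stratum
```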
|
|
Term
o Cluster sampling – |
|
Definition
sampling method wherein the population is divided into groups already clustered in certain areas or times; a sample is taken from each group by simple random or systematic random sampling – take a random sample within a specific group – used when the researcher can’t obtain a complete list of all members of the population, but can identify groups |
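A minimal sketch of one common form of cluster sampling – randomly selecting whole clusters and observing the units inside them (hypothetical Python; the classroom data are made up):

```python
import random

# Hypothetical population organized into classrooms (clusters)
clusters = {f"class_{c}": [f"class_{c}_kid_{i}" for i in range(25)]
            for c in range(20)}

rng = random.Random(42)
chosen = rng.sample(sorted(clusters), k=4)             # pick whole clusters at random
sample = [kid for c in chosen for kid in clusters[c]]  # units inside chosen clusters
print(chosen, len(sample))                             # 4 clusters, 100 kids
```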
|
|
Term
• Nonrandom (non-probability) sampling designs – quantitative |
|
Definition
we may or may not represent the population well and it is often hard for us to know how well we’ve done this. Researchers prefer probabilistic or random sampling methods over non-probabilistic ones, and consider them to be more accurate and rigorous p. 140 |
|
|
Term
o Convenience sampling – |
|
Definition
available – not generalizable – convenient – usually volunteers, yielding data from, e.g., Columbia Mall shoppers; used widely in both qual. and quant. studies |
|
|
Term
o Purposive - Purposeful sampling |
|
Definition
researcher selects a non-representative subset of some larger population and is constructed to serve a very specific need or purpose, like a study of female superintendents, emphasis on relying on the researcher’s judgment to choose within a population |
|
|
Term
o Quota sampling – |
|
Definition
when a researcher gathers data from a targeted number of non-randomly selected individuals possessing identified characteristics – position, ethnicity, gender, done to ensure the inclusion of a particular segment, group of the population, proportions may or may not differ dramatically from the actual population proportions |
|
|
Term
• Additional Sampling Issues |
|
Definition
o Sample Size o Subject Motivation o Sampling Bias o Margin of Error |
|
|
Term
o Margin of Error – |
|
Definition
used to report likely values for the population |
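A minimal sketch of the standard margin-of-error formula for a sample proportion (hypothetical Python; the survey numbers are made up):

```python
import math

def margin_of_error(p, n, z=1.96):
    """MoE for a proportion at 95% confidence: z * sqrt(p(1-p)/n)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical survey: 55% of 400 respondents agree
moe = margin_of_error(0.55, 400)
print(f"55% +/- {moe * 100:.1f} percentage points")  # about +/- 4.9
```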
|
|
Term
o Sampling Bias – |
|
Definition
researcher consciously or unconsciously selects subjects that skew the findings, there could be motive for this – such as brands and products |
|
|
Term
o Subject Motivation – |
|
Definition
honesty of subject responses – the extent to which subjects are motivated affects results, because they are volunteers |
|
|
Term
o Sample Size – |
|
Definition
sufficient size for credible results – to determine sample size, use 1. published tables/calculators that use formulas, or 2. rules of thumb or general guidelines |
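A minimal sketch of option 1, a published formula – here Cochran’s formula for estimating a proportion (hypothetical Python; the confidence level and margin are illustrative):

```python
import math

def required_sample_size(e, p=0.5, z=1.96):
    """Cochran's formula: n = z^2 * p(1-p) / e^2 (e = margin of error)."""
    return math.ceil(z**2 * p * (1 - p) / e**2)

print(required_sample_size(e=0.05))  # 385 for +/-5% at 95% confidence
```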
|
|
Term
What is Validity? How does one know if it is valid? |
|
Definition
Statistical validity – question the sampling or the measurement • Variables – proper measure – construct validity • Cause-and-effect relationship – internal validity • External validity – generalizability |
|
|
Term
Is Probability sampling always necessary in credible qualitative research? |
|
Definition
NO, because you don’t need a big population – as long as the subjects don’t lend themselves to over-generalization, and when the study calls for specificity, don’t leave the choice of subjects up to chance – something might work really well for one group of kids, but you can’t say it will work for all groups of kids, since you didn’t study them |
|
|
Term
Do experimental research designs always have high internal validity?: |
|
Definition
NO. THREATS AFFECT IT: • Rivalry – competing with other groups • Historical – events like earthquakes • Getting sick • Experimental Mortality – moving away |
|
|
Term
For action research, would you use probability or non-probability sampling? Specifically: |
|
Definition
Non-probability – Purposeful – if you are using action research, you are trying to make a difference with a set group of kids |
|
|
Term
Four Types of Validity (Quantitative) |
|
Definition
Statistical • Construct • Internal • External |
|
|
Term
Statistical Validity – |
|
Definition
how well an observed result can be relied upon and not attributed to random error in sampling or in measurement |
|
|
Term
Construct Validity – |
|
Definition
Can the operational definition of a variable actually reflect the true theoretical meaning of a concept? Fair measure of what you are testing – like MSA – does it really measure what it says it does? o Strengthened by: using well-validated and investigated constructs or measures (ones others have used and have been validated) |
|
|
Term
o Types of Construct Validity |
|
Definition
• Translation Validity – Face, Content • Criterion-related Validity – Predictive, Concurrent, Convergent, Discriminant |
|
|
Term
• Discriminant Validity – |
|
Definition
we examine the degree to which the operationalization is not similar to (diverges from) other operationalizations that it should be different from; helps establish construct validity by demonstrating that the construct you are interested in, like optimism, can be distinguished from related constructs – like “inexperienced teachers who are optimistic” vs. just “optimistic” – should the value for X depend on Y? It should depend on what you are measuring – MSA and Alt-MSA |
|
|
Term
• Convergent Validity – |
|
Definition
the degree to which the operationalization is similar to (converges on) other operationalizations that it theoretically should be similar to – LIKE: to show the convergent validity of a test of reasoning ability, scores on the test can be correlated with scores on other tests that are also designed to measure reasoning ability – how similar it is to another test |
|
|
Term
• Concurrent Validity – |
|
Definition
we assess the operationalization’s ability to distinguish between groups that it should theoretically be able to distinguish between – LIKE: if we come up with a way of assessing manic-depression, our measure should be able to distinguish between people who are diagnosed with manic-depression and those with paranoid schizophrenia – you can tell the difference |
|
|
Term
• Predictive Validity – |
|
Definition
we assess the operationalization’s ability to predict something it should theoretically be able to predict – LIKE: a measure of math ability should be able to predict how well a person will do in an engineering-based college major – EX: Does the GT test really show who will do well in the GT class? |
|
|
Term
• Content Validity – |
|
Definition
check the content (operationalization) of the item against established content in the field to ensure correspondence. Content validity is often addressed in academic and vocational testing, where test items need to reflect the knowledge actually required for a given topic area, like history, or job skill, accounting |
|
|
Term
• Operationalization – |
|
Definition
how you measure the concept – so it’s clear • Optimism - My lessons are always effective: • Agree • Disagree |
|
|
Term
• Translation Validity: Face Validity – |
|
Definition
look at the operationalization and see whether on its face it seems like a good translation of the construct. This is considered a relatively weak way to demonstrate construct validity o EX: Like looking at a GT test and seeing a low score – not GT o EX: Measure optimism – a disposition; there might be measures of optimism from previous studies, but for cutting-edge research you must operationalize – how you measure the concept – so it’s clear • Optimism – “My lessons are always effective”: • Agree • Disagree |
|
|
Term
External Validity – |
|
Definition
refers to how generalizable the study’s findings are |
|
|
Term
Internal Validity – |
|
Definition
refers to the confidence that we can place in the cause-and-effect relationship in a study – causality o Strengthened by: experimental designs with adequate controls o Experimental design has the highest internal validity |
|
|
Term
• Validity VS Reliability – |
|
Definition
o Reliability – consistency or stability of a measure or test o Validity – accuracy of the inferences or interpretations you make from the test scores – measures what it is supposed to measure o Reliability is a necessary (but not sufficient) condition for validity. You can have reliability without validity, but in order to obtain validity you must have reliability |
|
|
Term
• Validity VS Reliability – Test Construct |
|
Definition
o Test Construct Validity – measurement validity; whether inferences from the numerical scores are appropriate, meaningful, and useful. The researcher must describe validity in relation to the context in which the data are collected – measures what it is supposed to measure o Test Construct Reliability – reliability of scores, consistency of measurement – are the data accurate, or could there be human error? |
|
|
Term
Types of Reliability |
|
Definition
o Test Group Reliability – consistency of a group’s scores over time o Equivalent-Forms – consistency of a group’s scores on 2 equivalent tests o Internal Consistency – consistency of items in measuring a single construct • Split half – measure the correlation on 2 halves of a test taken by a single group of participants • Coefficient Alpha – inter-item correlation o Inter-rater (scorer) – consistency of agreement between raters |
|
|
Term
o Test Group Reliability – |
|
Definition
consistency of a group’s scores over time |
|
|
Term
o Equivalent-Forms Reliability– |
|
Definition
consistency of a group’s scores on 2 equivalent tests |
|
|
Term
o Internal Consistency Reliability - |
|
Definition
consistency of items in measuring a single construct • Split half – measure the correlation on 2 halves of a test taken by a single group of participants • Coefficient Alpha – inter-item correlation |
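A minimal sketch of split-half reliability (hypothetical Python; the half-test scores are made up, it assumes scipy is installed, and it adds the common Spearman-Brown correction, which the notes do not mention):

```python
from scipy import stats

# Hypothetical per-respondent scores on the two halves of one test
half_a = [10, 14, 9, 16, 12, 15, 11, 13]
half_b = [11, 13, 10, 15, 12, 14, 10, 14]

r, _ = stats.pearsonr(half_a, half_b)  # correlation between the two halves
full = 2 * r / (1 + r)                 # Spearman-Brown: step up to full-test length
print(f"split-half r = {r:.2f}, corrected = {full:.2f}")
```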
|
|
Term
o Inter-rater (scorer) Reliability– |
|
Definition
consistency of agreement between raters |
|
|
Term
• Quantitative Measurement Instruments – |
|
Definition
standardized tests, surveys, interviews |
|
|
Term
o Standardized Tests – |
|
Definition
• Norm-referenced tests – NRTs compare a person’s score against the group’s score (the norming group) – IQ test, SAT, ACT – percentiles • Criterion-referenced tests – CRTs are intended to measure how well a person has learned specific skills/knowledge – MSA – testing what has been taught • Norm vs. criterion – norm focuses on how a child compares to others in the group, while criterion focuses on whether the kid answered the questions correctly |
|
|
Term
o Surveys – 2 categories, and a mix: |
|
Definition
• Questionnaires – usually paper-and-pencil instruments that the respondent completes • Structured interviews – are completed by the interviewer based on what the respondent says • Both are closed-ended in quantitative studies • CLOSED-ENDED Questions – when your answers are limited to a fixed set of responses - Most scales are closed ended. (Set answer – not fill in the blank or selected response) o Yes/No o Multiple Choice o Scaled Questions – responses scored on a continuum – Likert Scale (strongly agree – strongly disagree), Rank-Order Scale |
|
|
Term
o Quantitative Observations |
|
Definition
• Non-participant observation used to document behaviors in a natural setting with a specified schedule • Duration – how long behavior is observed • Frequency – how often behavior occurs • Interval – behaviors occurring in a short period of time • Continuous – behaviors occurring over an extended period of time • Time Sampling – observations of particular behaviors made at random or following a specified schedule |
|
|
Term
Quantitative Research – characteristics |
|
Definition
• Methods – larger scale – surveys, tests, etc • Data – quantitative NUMBERS • Analysis - statistical • Non-experimental |
|
|
Term
4 Types of Quantitative Design - • Non-experimental Designs – low in internal validity, higher in external validity |
|
Definition
Descriptive, correlational, comparative, secondary data analysis |
|
|
Term
o Descriptive Designs – |
|
Definition
• Identifying characteristics of a phenomenon • Describe the variable under investigation • Do not examine relationships among variables found in a survey, for example • Not a relationship, just describing what they found – find out interests and respond to them; used to create policies, guide PD sessions, etc. |
|
|
Term
o Comparative (Causal-Comparative) Designs – |
|
Definition
• Involves collecting 2 or more variables on 1 group • Causal-comparative research involves the collection of data on one independent variable (that has already occurred) for 2 or more groups • Consider studies on the effects of cigarette smoking – can’t give people cigarettes to smoke, but can ask questions about their habits • Ex post facto – after the fact |
|
|
Term
o Correlational Designs – |
|
Definition
• Examine relationships among variables under investigation • Tend to examine relationships as they exist • Do not isolate and manipulate variables to establish causal relationships as in experimental research • These studies use at least two points of data scores • Cannot make causal statements with correlational research because the direction of the effects cannot be determined • Observing effects • EX – stress and life expectancy |
|
|
Term
o Secondary Data Analysis – |
|
Definition
• Use of data (longitudinal, not cross-sectional) that was collected by someone else (federal and local research organizations) • Longitudinal – collecting quantitative data over (a long) time • Cross-sectional – assessment of different groups at one time • Secondary Data – already collected and in a database • Secondary Data Analysis – statistically analyzing secondary data • Used for large samples, saves time, cost effective, good data quality • NAEP – National Assessment of Educational Progress – from NCES – nationally representative samples – focus on key grade levels • Researcher poses questions that are analyzed using data sets that they were not involved in collecting • http://nces.ed.gov/surveys/ |
|
|
Term
• Experimental Designs – |
|
Definition
high in internal validity, lower external validity |
|
|
Term
o Experimental Design – |
|
Definition
a variable is systematically manipulated (INDEPENDENT VARIABLE – IV) to observe the effect of the manipulation on another variable (DEPENDENT VARIABLE – DV); randomized assignment of participants |
|
|
Term
o Control Group – |
|
Definition
subjects who did not receive the targeted intervention/treatment – left alone |
|
|
Term
o Experimental – Randomized grouping type experiment |
|
Definition
• Common experimental design – pre-test/post-test control group design • Treatment group vs. control group – look at before, if exposed, look at after • If there is change from pre- to post- for the treatment group and not the control group (or not as much), the change can be attributed to the treatment (assuming proper experimental controls) • Post-test only control design – treatment group vs. control – look at the measure after, exposed vs. not exposed – more open to threats |
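A minimal sketch of comparing pre-to-post gains for treatment vs. control (hypothetical Python; all scores are made up):

```python
import statistics

# Hypothetical pre/post scores for each group
treat_pre, treat_post = [60, 62, 58, 65, 61], [72, 74, 69, 78, 73]
ctrl_pre, ctrl_post = [59, 63, 60, 64, 62], [61, 64, 60, 66, 63]

treat_gain = statistics.mean(b - a for a, b in zip(treat_pre, treat_post))
ctrl_gain = statistics.mean(b - a for a, b in zip(ctrl_pre, ctrl_post))

# A much larger treatment gain (with proper controls) is attributed to the treatment
print(f"treatment gain {treat_gain:.1f} vs. control gain {ctrl_gain:.1f}")
```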
|
|
Term
o Quasi-Experimental – Naturally occurring grouping – not random |
|
Definition
• Not true experiments, because the groups tend to be naturally occurring, not groups created by the researcher • Give pretest, posttest • The only difference is that the participants are not random – although not “true experiments,” the researcher has a lot of control over sources of invalidity, and they are stronger than pre-experimental designs • Try to match participants and control for confounding variables as much as possible • Common quasi-experimental design – pretest/posttest non-equivalent group design • Treatment group vs. control group – measure before, if exposed, measure after • Happens often in education – think of interests – like adventure learning; those kids were naturally more motivated, so they signed up |
|
|
Term
o Pre-experimental Designs- |
|
Definition
low in internal and external validity, inexpensive to conduct, effective exploratory studies. WORST – least valid – Adventure Learning |
|
|
Term
• Single Group Posttest-only design – |
|
Definition
researcher gives a treatment and then measures the outcome of interest, assuming any changes are due to the intervention, but there is no baseline to compare it to • THREATENED |
|
|
Term
• Single Group Pretest AND Posttest-only design – |
|
Definition
a single case observed at two time points, not one – before and after. Changes in the outcome of interest are presumed to be the result of the intervention or treatment. No control or comparison group is employed – no control group to compare to, but you can compare the baseline pretest to the posttest |
|
|
Term
Pre-experimental designs - advantages and disadvantages |
|
Definition
• Advantages – as exploratory approaches, this can be a cost-effective way to discern whether a potential explanation is worthy of further investigation • Disadvantages – often difficult or impossible to rule out alternative explanations. The nearly insurmountable threats to their validity are clearly the most important disadvantage of pre-experimental designs |
|
|