Term
What are the distinctions among education, curriculum and instruction? |
|
Definition
- Education is the process of helping to change students' knowledge and behaviour in desired ways
- A curriculum describes the skills, performances, knowledge and attitudes students are expected to learn in school
- Instruction is the set of methods and processes used to contribute to learning and change in students' behaviour |
|
|
Term
What is the difference between achievement and aptitude or ability? |
|
Definition
- Achievement refers to school-based learning
- Aptitude or ability refers to the capacity to learn a skill or accomplish a task of a given level of complexity |
|
|
Term
Describe the three components of the instructional process |
|
Definition
- Planning instruction includes identifying specific expectations, objectives or learning outcomes, selecting materials to foster these outcomes, and organizing learning experiences into a coherent, reinforcing sequence. The teacher considers the characteristics of the students and the resources and materials available to help support learning.
- Delivering the planned instruction to students includes providing them with appropriate learning experiences.
- Assessing student learning determines whether the planned outcomes have been achieved and informs future planning. |
|
|
Term
Describe the areas to consider when planning instruction |
|
Definition
When planning instruction consider:
- Student characteristics (prior knowledge, prerequisite skills and knowledge, work habits, socialization, special learning needs, learning styles, cultural/language differences, exceptionalities)
- Teacher characteristics (content knowledge, instructional method preferences, assessment preferences, physical limitations, prior experiences, attitudes and beliefs)
- Instructional resources (ministry/department curriculum outcomes, textbook materials, technology, collegial and administrative support, space, educational assistants, equipment) |
|
|
Term
What are the three general levels of learning objectives? |
|
Definition
The three general levels of learning objectives are global objectives, educational objectives and instructional objectives |
|
|
Term
What are the three domains of human behaviour for which educational outcomes can be written? |
|
Definition
-The cognitive domain includes intellectual activities such as memorizing, interpreting, applying, problem solving, reasoning, analyzing, and thinking critically
- The affective domain involves feelings, attitudes, interests, preferences, values and emotions
- The psychomotor domain includes physical and manipulative activities |
|
|
Term
What is Bloom's Taxonomy and how does it compare to the Revised Bloom's Taxonomy? |
|
Definition
- Bloom's Taxonomy is a system of cognitive processes organized into six levels, with each successive level representing a more complex type of cognitive process.
- Starting with the simplest and moving to the most complex, the six cognitive taxonomic processes are knowledge, comprehension, application, analysis, synthesis and evaluation.
- More recently, the unidimensional nature of Bloom's Taxonomy was revised, and the cognitive processes were reconfigured as remember, understand, apply, analyze, evaluate and create. |
|
|
Term
What are some essential elements of effective learning outcomes? |
|
Definition
Effective outcomes:
- describe student behaviour that results from instruction
- Represent important content of a lesson, unit or curriculum
- centre on a verb that specifies student performance
- can be fulfilled in a reasonable amount of time
- state the behaviour in terms that can be observed, assessed and evaluated |
|
|
Term
What are some important guidelines to follow when planning instruction? |
|
Definition
- Perform complete before-instruction or diagnostic assessments of students' characteristics and needs.
- Use before-instruction or diagnostic assessment information when planning.
- Do not rely entirely and uncritically on textbooks and teachers' manuals when planning.
- Include a combination of lower-level and higher-level outcomes.
- Include a wide range of instructional activities and strategies to fit the students' instructional needs.
- Map educational outcomes with teaching strategies and planned assessments.
- Recognize one's own knowledge and pedagogical limitations and planning preferences.
- Include assessment strategies in instructional plans. |
|
|
Term
What are the distinctions among the terms assessment, evaluation, measurement and test? |
|
Definition
- Assessment is the process of collecting, synthesizing, and interpreting information about students and instruction to aid in classroom decision-making.
- Evaluation is the process of using assessment data to make judgments about the level of student learning, interest, and/or attitude, and about the quality of classroom instruction.
- Measurement is the process of collecting data on student performance using an instrument that is associated with a numerical scale.
- A test is a formal, systematic, often paper-and-pencil procedure used to gather information about students' performance. A test is just one tool for gathering assessment information. |
|
|
Term
What is the difference between a true score and an observed score? |
|
Definition
- True score is a precise measure of a student's level of ability.
- Observed score is the score that each student gets on a test. |
|
|
Term
What are the purposes of classroom assessment? |
|
Definition
- Assessment for learning provides teachers with information to modify instruction in order to differentiate student learning.
- Assessment as learning emphasizes using assessment as a process of developing and supporting students' metacognition.
- Assessment of learning confirms what students know, determines whether they have met standards, and/or shows their placement relative to others. |
|
|
Term
What are the differences among diagnostic, formative and summative assessment? |
|
Definition
- Diagnostic assessment is used before instruction to determine students' characteristics and needs and to inform planning
- Formative assessment occurs during instruction and is used to monitor and improve student learning and teaching while they are still in progress
- Summative assessment occurs after instruction and is used to judge the success of instruction and the level of student learning |
|
|
Term
What are the time frame differences among diagnostic, formative and summative assessment? |
|
Definition
- Diagnostic assessment takes place at the beginning of a unit of instruction and provides teachers with information about their students' academic, social, attitudinal and behavioural characteristics
- Formative assessment occurs while instruction is taking place and includes observations and feedback intended to alter and improve students' learning and teachers' instructional practice
- Summative assessment occurs at the end of instruction and gives information about students' learning |
|
|
Term
What is the difference between performance and authentic assessment? |
|
Definition
- Performance assessment occurs while students are actively engaged in a task
- Authentic assessment involves the assessment of tasks within the classroom that replicate those undertaken in real-world contexts. Some authentic assessment tasks employ performance assessment; others do not. |
|
|
Term
What are the three primary techniques by which teachers gather assessment data? |
|
Definition
The three primary techniques by which teachers gather assessment data are paper-and-pencil techniques, observation techniques, and oral questioning techniques |
|
|
Term
What is the difference between standardized and non-standardized assessments? |
|
Definition
- Standardized assessments are administered, scored and interpreted in the same way for all students taking the test, no matter when and where they are used.
- Non-standardized (teacher-made) assessments are developed for individual student performance and are not used for comparison with other students |
|
|
Term
What is the difference between validity and reliability? Specify the three types of validity. |
|
Definition
- Validity is the degree to which an assessment measures what it intends to measure.
There are three types of validity: content-related, criterion-related, and construct-related
- Reliability refers to the consistency of an assessment measure when repeated over a period of time (i.e., whether it is a precise measure of a student's performance) |
|
|
Term
What are the differences among measurement error, systematic errors, and random errors? |
|
Definition
- Measurement error is the difference between the true score and the observed score
- Systematic errors represent a consistent variation from the true score
- Random errors are unpredictable as they are introduced by factors that vary from one assessment situation to another |
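The relationship among these terms can be sketched in a few lines of Python. This is a hypothetical illustration, not from the source; the true score, error sizes and random seed are all invented:

```python
import random

random.seed(7)

TRUE_SCORE = 80          # hypothetical student's true score
SYSTEMATIC_ERROR = -3    # e.g., a consistently mis-keyed item penalizes every sitting

def observed_score():
    """Observed score = true score + systematic error + random error."""
    random_error = random.gauss(0, 2)   # varies unpredictably from one sitting to another
    return TRUE_SCORE + SYSTEMATIC_ERROR + random_error

# Simulate many sittings: the systematic error shifts the average,
# while the random errors tend to cancel out over repeated measurements.
scores = [observed_score() for _ in range(1000)]
mean_observed = sum(scores) / len(scores)
print(round(mean_observed))  # close to 77, i.e. TRUE_SCORE + SYSTEMATIC_ERROR
```

The sketch shows why random error alone averages away but systematic error does not: no amount of retesting recovers the true score of 80 while the consistent bias remains.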
|
|
Term
What are the different types of reliability measures? |
|
Definition
- Test-retest reliability is the extent to which a test yields the same performance when students are given it on two different occasions
- Alternate forms reliability involves giving different forms of the same test on two different occasions to the same group of students to determine how consistent the scores are
- Internal consistency reliability is a measure of how consistently a concept is measured across the items on a test
- Decision consistency is the degree of consistency of classification decisions across a number of similar criterion-referenced tests |
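As a sketch of how test-retest reliability is typically quantified, the Python below computes a Pearson correlation between two sittings of the same test. The five student scores are hypothetical, and this is one common statistic for the idea rather than the only one:

```python
def pearson_r(xs, ys):
    """Pearson correlation: the usual statistic behind test-retest reliability."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical scores for five students on two sittings of the same test
first = [72, 85, 64, 90, 78]
second = [75, 82, 68, 88, 74]

print(round(pearson_r(first, second), 2))  # 0.96
```

A value near 1.0 means students keep roughly the same relative standing across sittings, which is what "the test yields the same performance" amounts to in practice.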
|
|
Term
What are the guidelines for fairness and equitability in the assessment and evaluation of students? |
|
Definition
The criteria for determining fairness and equitability in student assessment and evaluation have been outlined as principles and guidelines in Principles for Fair Student Assessment Practices for Education in Canada. |
|
|
Term
What is the role of assessment in the IEP process? |
|
Definition
The following are the typical assessment and evaluation steps in the IEP process:
1. Identification - Teacher is aware that a student is having difficulty with learning and/or behaviour
2. Diagnostic Instruction - Teacher adjusts instruction or management methods to determine if this will alleviate the student's difficulties
3. Referral - If the adjustments do not resolve the student's difficulties, the teacher refers the student to the school-based team and presents relevant information about the student
4. Assessment/IEP - Data regarding current level of functioning are gathered via psycho-educational testing. Assessment results are used to develop an IEP.
5. Educational Intervention - Based on the IEP, the teacher provides the student with appropriate educational interventions (accommodations and/or modifications).
6. Evaluation of Student Progress - Teacher uses a variety of assessment tools to determine the student's progress. Evaluation compares the student's performance with the objectives outlined in the IEP, and it is determined whether further assessment or changes to the IEP are needed. |
|
|
Term
What is diagnostic or before-instruction assessment? |
|
Definition
- Diagnostic or before-instruction assessment provides teachers with information about their students' academic, social, attitudinal and behavioural characteristics to enhance communication and instruction in the classroom.
- Diagnostic or before-instruction assessment generally occurs at the beginning of the school year or before a unit of instruction |
|
|
Term
What is formative or during-instruction assessment? |
|
Definition
- Formative or during-instruction assessment is the process of collecting data to improve students' learning while instruction is taking place. |
|
|
Term
Describe the differences between assessment for/as/of learning. |
|
Definition
- Assessment for learning gives teachers information to modify instruction to differentiate and focus how individual students approach their learning.
- Assessment as learning emphasizes using assessment as a process of developing and supporting metacognition for students.
- Assessment of learning is used to confirm what students know, to demonstrate whether or not students have met the standards, and/or to show how they are placed in relation to others. |
|
|
Term
Why do teachers gather information about students before instruction and what are the sources of this information? |
|
Definition
- Teachers gather information about the characteristics of their students, as this determines the ways in which they plan, teach, assess, and evaluate.
- Prior to instruction, teachers use personal observations, official school records, comments from other teachers, and formal assessments as sources of learning about students.
- Teachers also glean information from what students say (e.g., class discussions), what students do (e.g., group work) and what students write (e.g., journals). |
|
|
Term
How do teachers form student descriptions based on diagnostic or before-instruction assessment? |
|
Definition
- Student descriptions rely on informal and formal information and convey a perception about many dimensions of student behaviour and background
- Information about student behaviour and performance gained from diagnostic or before-instruction assessments sets up expectations that influence the way a teacher plans, instructs, assesses and evaluates students |
|
|
Term
What are the six guidelines for diagnostic or before-instruction assessment? |
|
Definition
- Be aware of the effects of diagnostic assessment
- Treat initial impressions as hypotheses
- Use direct indicators to gather information about students
- Supplement informal observations with formal activities
- Observe for a period of time
- Determine whether different kinds of information are confirming |
|
|
Term
What are the four issues that contribute to concerns about ethics and accuracy of diagnostic or before-instruction assessment? |
|
Definition
- Teachers' initial impressions are stable
- Teachers' perceptions of affective characteristics are not always accurate
- Teachers unintentionally communicate impressions
- Teachers' perceptions create a self-fulfilling prophecy, which occurs when the teacher's expectations for a student lead them to interact with that student in a particular manner; the student observes the way the teacher interacts with him or her and begins to behave in the way or at the level the teacher expects. |
|
|
Term
What are the threats to validity of diagnostic or before-instruction assessment? |
|
Definition
- Prejudgment occurs when a teacher's prior knowledge, first impression or personal prejudice and beliefs interfere with the ability to make a fair and valid assessment of a student.
- Logical error occurs when teachers focus on the wrong indicators to assess and evaluate desired student characteristics, thereby invalidating their judgments. |
|
|
Term
What are the threats to reliability of before-instruction or diagnostic assessment? |
|
Definition
- Observed samples of behaviour must capture student characteristics consistently across several settings
- Teachers must be sure to observe sufficient samples of students' behaviour before they solidify their initial perceptions and rely on these impressions for decision-making |
|
|
Term
Describe the sources of formative assessment data that are gathered for learning about instruction |
|
Definition
- Teachers gather formative assessment data by observing students' attention, questioning students about the content of the lesson, checking assigned work and giving tests
- Teachers also use informal indicators such as behavioural cues from the students, eye contact, facial expressions or body language
- Teachers should engage students in discussion that explores their understanding and provides them with feedback on how to improve |
|
|
Term
How are indicators used to make decisions during instruction? |
|
Definition
- Teachers use formative assessment data to make decisions about whether new instruction may proceed, and, if not, how previous instruction might change to improve student learning
- Formative assessment data help monitor factors such as interest levels of students, potential behaviour problems, appropriateness of instructional techniques, adequacy of students' answers, pace of instruction, implications of students' questions, transitions from one activity to another, suitability of examples, degree of students' comprehension, and appropriateness of ending the lesson |
|
|
Term
What are some guidelines for formative or during-instruction assessment? |
|
Definition
- Include a broad sample of students when assessing instruction
- Supplement informal assessment information with more formal information about student learning
- Gather formative assessment information from both lower and higher level instructional activities
- Take the time to gather sufficient corroborative evidence that supports the decision-making process
- Do not underestimate the value of practical knowledge |
|
|
Term
What are the six purposes for using questioning during instruction? |
|
Definition
The six purposes for using questioning during instruction are to promote:
- Attention
- Deeper processing
- Learning from peers
- Reinforcement
- Pace and control
- Diagnostic information |
|
|
Term
What are the different types of questions? |
|
Definition
- Convergent questions prompt for one single correct answer. Divergent questions prompt for many appropriate answers.
- Lower-level questions focus on factual information. Higher-level questions require students to perform processes such as applying, analyzing or evaluating information |
|
|
Term
What strategies can teachers use to improve the effectiveness of their oral questioning? |
|
Definition
- Relate questions to instructional outcomes
- Avoid overly general questions
- Involve the entire class in questioning
- Distribute questions among students
- Allow wait time after asking a question
- State questions clearly and directly
- Probe responses with follow-up questions
- Treat students with encouragement and respect
- Allow private questioning for some students
- Be a good listener
- Be sensitive to cultural differences |
|
|
Term
What are some guidelines to enhance the validity and reliability of formative assessment? |
|
Definition
- Include a broad sample of students when assessing and evaluating instruction
- Supplement informal assessment information with more formal information about student learning
- Gather formative assessment information from both lower and higher level instruction activities
- Take the time to gather sufficient corroborative evidence that supports the decision-making process |
|
|
Term
What are some considerations for before and during instruction assessment of students with exceptionalities? |
|
Definition
- Curriculum-based assessment (CBA) isolates assessment to a particular aspect or small sample of the curriculum
- The performance of students with exceptionalities is tracked daily for their progress through the curriculum, not for comparative purposes. |
|
|
Term
What practices should teachers follow to prepare students for tests? |
|
Definition
- Do not teach students the exact items that will be on the test
- Give students practice with question formats prior to testing
- Bear in mind that some students are able to perform better on tests than others
- Give students information about the nature of the test |
|
|
Term
What is the difference between preparing students for a test and teaching to the test? |
|
Definition
- Preparing students for a test involves helping students to learn the general skills, knowledge, and processes that they need to answer the questions on a test.
- Teaching to the test involves providing students with the answers to specific questions that will appear on the test |
|
|
Term
What are some general test-taking strategies that students should employ? |
|
Definition
- Keep a personal dictionary with a list of unfamiliar words with definitions, synonyms, and contextual use
- Use available practice tests to run through reading a selection and completing its questions, then check your answers. Try to determine why a certain answer is correct and why the other choices are incorrect.
- Make notes in the margins of ideas or reminders.
- Read all material included with the reading selection
- Cross out or eliminate choices |
|
|
Term
What are some strategies that students can use to help them write multiple choice exams? |
|
Definition
- Read the stem carefully and try to answer the question before looking at the options
- Read all of the options before selecting an answer
- Eliminate options that are clearly incorrect
- Be cautious with options containing specific determiners such as always or never
- If guessing is not penalized, answer every question |
|
|
Term
What are some general test-taking strategies that students should employ? |
|
Definition
- Read test directions carefully
- Find out how questions will be scored. Will all questions count equally? Will points be taken off for spelling, grammar and neatness?
- Pace yourself to ensure that you can complete the test
- Plan and organize any essay questions before writing
- Attempt to answer all questions. If guessing is not penalized, guess when you don't know the answer
- When using a separate answer sheet, check often to make certain that you are marking your responses in the correct space
- Be well-rested at the time of testing by avoiding late-night cram sessions |
|
|
Term
What are three considerations for test administration? |
|
Definition
- Physical setting - Ensure students have a quiet testing environment with minimal interruptions
- Psychological setting - Reduce student anxiety by providing advance notice of a test, preparation and review
- Keeping track of time - Remind students of the test time remaining with announcements |
|
|
Term
What are some methods that students use to cheat on tests? |
|
Definition
- Looking at another student's test paper during a test
- Passing an eraser between two students who write test information on the eraser
- Looking at other students' papers while walking up to the teacher to ask a question about the test
- Using crib notes or small pieces of paper
- Using cell phones to text between students and to access the Internet for answers |
|
|
Term
What is testwiseness and what are some strategies that minimize the influence of testwiseness? |
|
Definition
- Testwiseness refers to a person's capacity to use the characteristics and formats of the test and/or the test-taking situation to receive a high score
To minimize testwiseness:
- The order of answer options should be logical or varied
- Answer options should all be grammatically consistent with the stem
- Specific determiners (e.g., always, never) should not be used
- Answer options should be homogeneous
- The correct answer should not always be the longest answer option
- Items should be independent of each other
- Stems and examples should not be taken directly from the textbook
- Answer options should be logically independent of one another
- There should be an equal number of true and false statements
- True and false statements should be of equal length |
|
|
Term
How might teachers discourage cheating? |
|
Definition
- Consider seating arrangements
- Know common methods of student cheating
- Carefully proctor during testing
- Reprimand students who do cheat |
|
|
Term
What is the difference between objective scoring and subjective scoring? |
|
Definition
- With objective scoring, independent scorers usually arrive at the same or very similar scores.
- With subjective scoring, independent scorers have difficulty arriving at the same or similar scores. |
|
|
Term
What are some methods for scoring selected response, short-answer and completion and essay items? |
|
Definition
Selected response items:
- Prepare an answer key and ensure that it is correct
Short answer and completion items:
- Prepare an answer key
- Determine how factors such as spelling, grammar and punctuation will be handled in scoring
- Be prepared to consider unexpected responses and decide if these responses are the result of faulty test items or a lack of student learning
Essay items:
- Define what constitutes a good answer and decide on the scoring method (analytic or holistic)
- Be alert to factors that might affect the objectivity of essay scoring such as students' writing style, grammar and spelling, neatness, scorer fatigue, prior performance, students' identity, and carryover effects
- Ensure that the response scoring criteria match the essay question
- Tell students how handwriting, punctuation, spelling and organization will be scored
- If possible, score students' tests anonymously
- Score all responses to the first essay question before moving on to score succeeding questions
- Read essay questions a second time after initial scoring |
|
|
Term
What are some steps for ensuring scoring objectivity? |
|
Definition
- Ensure that the response scoring criteria match the essay question
- Decide on and tell students how handwriting, punctuation, spelling and organization will be scored
- If possible, score students' tests anonymously
- In tests with multiple essay items, score all student answers to the first question before moving to the second question
- Read essay answers a second time after initial scoring |
|
|
Term
Why should teachers perform after-test analyses? |
|
Definition
- To identify and make scoring adjustments for any items that students' answers show were misunderstood or ambiguous
- To identify ways to improve items for use on future tests |
|
|
Term
What are the difficulty index and the discrimination index? |
|
Definition
- The difficulty index of an item describes the proportion of students who answered the item correctly
- The discrimination index describes how an individual item fares with students who scored high and low on the overall test |
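Both indices reduce to simple arithmetic. A minimal Python sketch, using invented response data and the common upper-half/lower-half form of the discrimination index:

```python
# Item responses for one test item: 1 = correct, 0 = incorrect, one entry per
# student, ordered here from highest to lowest total test score (hypothetical data).
responses = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]

def difficulty_index(item):
    """Proportion of students answering the item correctly (the p-value)."""
    return sum(item) / len(item)

def discrimination_index(item):
    """p(upper group) - p(lower group): positive means high scorers do better."""
    half = len(item) // 2
    upper, lower = item[:half], item[half:]
    return difficulty_index(upper) - difficulty_index(lower)

print(difficulty_index(responses))               # 0.6
print(round(discrimination_index(responses), 2)) # 0.4
```

Here 6 of 10 students answered correctly (difficulty 0.6), and the item favours the top half of scorers (0.8 vs 0.4), so it discriminates in the expected direction; a negative index would flag a flawed item.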
|
|
Term
What are the accommodations in testing that can be made for students with exceptionalities? |
|
Definition
- Presentation format (read directions and test questions, simplify reading level)
- Response format (tape record responses, provide a scribe, provide models, outlines, or formulas)
- Test timing (provide extra time, give breaks, test over a period of days)
- Test setting (one-on-one, away from distractions) |
|
|
Term
What are summative or after-instruction assessments? |
|
Definition
- Summative assessments, also called assessments of learning, are assessments about students' learning at the end of instruction. They usually take the form of tests, projects, papers or examinations
- Summative or after-instruction assessments are used to confirm what students know, determine whether they have met standards, and/or show their placement relative to others |
|
|
Term
What are the two basic types of written test items? |
|
Definition
- The two basic types of test items are selected response and constructed response
- Selected response items are those in which the student selects the correct answer from among a number of choices. Examples include multiple-choice items, alternate response items, matching items, and interpretive exercises
- Constructed response items are those that require students to construct their own answer. Examples include short-answer items, completion items and essay items |
|
|
Term
What are the parts of any multiple-choice item? |
|
Definition
- Multiple-choice items consist of a stem, which presents the problem or question, and a set of options from which students select an answer
- Among the set of options, there is one correct option and usually at least three incorrect (but reasonable) options, or distractors |
|
|
Term
What are specific determiners? |
|
Definition
- Specific determiners are clues in alternate response questions that help students answer the question correctly
- Examples of specific determiners are words such as always, never, all and none, which tend to appear in statements that are false, and words like some, sometimes, and may, which tend to appear in statements that are true
- Avoid specific determiners in the construction of test items |
|
|
Term
What are the similarities and differences among diagnostic or before-instruction, formative or during-instruction, and summative or after-instruction assessments? |
|
Definition
- Diagnostic assessment is used to determine the ways in which teachers plan, teach, assess, and evaluate; formative assessment is used to monitor and guide instruction and learning while it is still in progress; and summative assessment is used to judge the success of instruction at its completion
- Diagnostic assessment occurs before instruction, formative assessment occurs during instruction, and summative assessment occurs after instruction
- Diagnostic assessment uses observations, school records, discussions, quizzes, and comments; formative assessment uses observations, questions, seatwork, quizzes and homework; and summative assessment uses tests, projects, papers and examinations
- Diagnostic assessment can be used to hone the content and method of instruction. Formative assessment can be used to improve and change instruction and learning while it is going on. Summative assessment can be used to judge the overall success of instruction and to grade, place or promote. |
|
|
Term
What are some of the advantages and disadvantages of different types of selected and constructed response test items? |
|
Definition
- Multiple choice: advantages include large numbers of items and quick, objective scoring; disadvantages include substantial time to construct items and difficulty in finding suitable options
- Alternate response: advantages include large number of items and quick, objective scoring; disadvantages include student guessing and focus on recall
- Matching: advantages include ease of construction and quick, objective scoring; disadvantages include focus on lower-level outcomes and requirement of homogenous topics
- Short-answer: advantages include ease of construction and breadth of knowledge that can be assessed; disadvantages include time-consuming scoring and lack of usefulness for assessing complex outcomes
- Interpretive exercise: advantages include ability to assess higher-level outcomes and quick, objective scoring; disadvantages include substantial time to construct items and dependence on students' reading ability |
|
|
Term
How can questions be written to assess higher levels of thinking? |
|
Definition
- Higher-level thinking questions assess cognitive processes beyond simple recognition or recall
- Such questions require students, for example, to solve a problem, interpret a chart, explain a concept, or identify a relationship between two phenomena.
- Essay questions are an important tool for assessing higher-level thinking |
|
|
Term
What are guidelines for constructing or selecting an interpretive exercise? |
|
Definition
- The exercise should be related to the instruction provided to the students
- The material presented in the exercise should be new to the students, but similar to material presented during instruction
- There should be sufficient information for students to answer the exercise, but it should not be a test of reading speed and accuracy
- The correct answer should not be found directly in the material presented within the exercise. Interpretation, understanding, application, analysis and evaluation should be needed to determine the correct answer.
- Each interpretive exercise should include more than one question to make most efficient use of time. |
|
|
Term
For which levels of the Revised Bloom's Taxonomy should test questions be written? |
|
Definition
- Test items should be written for all levels that reflect the content and level of thinking to which the students have been exposed during instruction |
|
|
Term
What are some rules for writing sound test items? |
|
Definition
- Avoid ambiguous and confusing wording and sentence structure
- Use appropriate vocabulary
- Keep questions short and to the point
- Write items that have one correct answer
- Give information about the nature of the desired answer
- Do not provide clues to the correct answer
- Do not overcomplicate test items |
|
|