Domain III: Competency 010 (PPR)
Struggling reader
A Typical Definition of Struggling Reader: Students who are considered struggling readers typically read one or more years below their current grade level but do not have an identified learning disability of any kind. They are often perceived as lacking the skills other students possess and use with little difficulty, such as analyzing information, defining vocabulary words, or applying comprehension strategies. Their difficulties with reading are typically attributed to inadequate instruction or to their own failure to fully engage with and learn from texts and instruction. Overall, struggling readers are often portrayed as students who have problems, who experience failure with reading in school, and who have little to offer. It is commonly assumed that the problem resides within them and that curriculum and instruction can simply be adjusted in ways that can "fix" them - should they choose to fully engage with it.
Norm-referenced test (NRT)
A norm-referenced test (NRT) is a type of test, assessment, or evaluation which yields an estimate of the position of the tested individual in a predefined population, with respect to the trait being measured. The estimate is derived from the analysis of test scores and possibly other relevant data from a sample drawn from the population. That is, this type of test identifies whether the test taker performed better or worse than other test takers, not whether the test taker knows either more or less material than is necessary for a given purpose. The term normative assessment refers to the process of comparing one test-taker to his or her peers. Norm-referenced assessment can be contrasted with criterion-referenced assessment and ipsative assessment. In a criterion-referenced assessment, the score shows whether or not test takers performed well or poorly on a given task, not how that compares to other test takers; in an ipsative system, test takers are compared to previous performance. The same test can be used in both ways. Robert Glaser originally coined the terms norm-referenced test and criterion-referenced test.
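The idea of locating a test taker within a norming population can be made concrete with a short sketch. The midpoint convention used here (all scores strictly below, plus half of the tied scores) is one common definition of percentile rank; the norm sample data are invented for illustration.

```python
from bisect import bisect_left, bisect_right

def percentile_rank(score, norm_sample):
    """Percent of the norming sample at or below `score`, using the
    midpoint convention: scores strictly below plus half of the ties."""
    ordered = sorted(norm_sample)
    below = bisect_left(ordered, score)          # count strictly below
    equal = bisect_right(ordered, score) - below  # count of exact ties
    return 100.0 * (below + 0.5 * equal) / len(ordered)

# A hypothetical norming sample of raw scores:
norms = [12, 15, 18, 20, 20, 22, 25, 27, 30, 33]
print(percentile_rank(25, norms))  # 65.0 -- better than most of the sample
```

Note that the result says nothing about how much material the student knows, only where the student stands relative to the norming sample, which is exactly the contrast with criterion-referenced scoring described above.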
Raw score
A raw score indicates the number of points a student earned on a test. For example, on a STAR test each question is worth 1 point, so if a student correctly answered 30 questions out of 50, his or her raw score would be 30. When a student takes a CMA, CST, or STS (all STAR tests), he or she receives a raw score (number of items correct) for each content cluster and also for the test as a whole. This raw score can also be translated into percent correct for content clusters, but not for the test as a whole. When a student takes a CAPA (another STAR test), he or she receives a raw score (total of scores earned on tasks) only for the test as a whole. The raw score for the overall test is transformed into a scale score (also called a scaled score) through an equating process that allows a test (same subject, same grade level, etc.) to represent the same level of difficulty from one year to the next. Scale scores then translate into proficiency levels (Far Below Basic, Below Basic, Basic, Proficient, and Advanced), which also represent the same level of difficulty for the same test from one year to the next (scale scores and proficiency levels are available only for the test as a whole, not for content clusters). Thus raw scores do not translate directly to proficiency levels on state tests as they typically do on local assessments.
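The raw-score and percent-correct arithmetic above is simple to sketch. The scale-score cut points in the example below are invented placeholders: real proficiency cuts come from each test's equating study and cannot be derived from raw scores alone, which is the paragraph's closing point.

```python
def raw_score(responses, key):
    """Raw score = number of items answered correctly."""
    return sum(r == k for r, k in zip(responses, key))

def percent_correct(raw, total_items):
    return 100.0 * raw / total_items

# Hypothetical minimum scale scores per proficiency level, highest first.
# These numbers are illustrative only, not actual STAR cut scores.
CUTS = [(400, "Advanced"), (350, "Proficient"),
        (300, "Basic"), (150, "Below Basic")]

def proficiency(scale_score):
    for floor, label in CUTS:
        if scale_score >= floor:
            return label
    return "Far Below Basic"

key = list("ABCDA")
answers = list("ABCCA")
r = raw_score(answers, key)          # 4 of 5 items correct
print(r, percent_correct(r, 5))      # 4 80.0
print(proficiency(372))              # Proficient (under the made-up cuts)
```

The missing middle step, turning a raw score into a scale score, is deliberately absent here because the equating function is specific to each test form and year.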
Standardized test
A standardized test is any form of test that (1) requires all test takers to answer the same questions, or a selection of questions from a common bank of questions, in the same way, and that (2) is scored in a "standard" or consistent manner, which makes it possible to compare the relative performance of individual students or groups of students. While different types of tests and assessments may be "standardized" in this way, the term is primarily associated with large-scale tests administered to large populations of students, such as a multiple-choice test given to all the eighth-grade public-school students in a particular state. In addition to the familiar multiple-choice format, standardized tests can include true-false questions, short-answer questions, essay questions, or a mix of question types. While standardized tests were traditionally presented on paper and completed using pencils, and many still are, they are increasingly being administered on computers connected to online programs (for a related discussion, see computer-adaptive test). While standardized tests may come in a variety of forms, multiple-choice and true-false formats are widely used for large-scale testing situations because computers can score them quickly, consistently, and inexpensively. In contrast, open-ended essay questions need to be scored by humans using a common set of guidelines or rubrics to promote consistent evaluations from essay to essay—a less efficient, more time-intensive, and more costly option that is also considered to be more subjective. (Computerized systems designed to replace human scoring are currently being developed by a variety of companies; while these systems are still in their infancy, they are nevertheless becoming the object of growing national debate.)
While standardized tests are a major source of debate in the United States, many test experts and educators consider them to be a fair and objective method of assessing the academic achievement of students, mainly because the standardized format, coupled with computerized scoring, reduces the potential for favoritism, bias, or subjective evaluations. On the other hand, subjective human judgment enters into the testing process at various stages—e.g., in the selection and presentation of questions, or in the subject matter and phrasing of both questions and answers. Subjectivity also enters into the process when test developers set passing scores—a decision that can affect how many students pass or fail, or how many achieve a level of performance considered to be "proficient." For more detailed discussions of these issues, see measurement error, test accommodations, test bias, and score inflation. Standardized tests may be used for a wide variety of educational purposes. For example, they may be used to determine a young child's readiness for kindergarten, identify students who need special-education services or specialized academic support, place students in different academic programs or course levels, or award diplomas and other educational certificates. The following are a few representative examples of the most common forms of standardized test: Achievement tests are designed to measure the knowledge and skills students learned in school or to determine the academic progress they have made over a period of time. The tests may also be used to evaluate the effectiveness of schools and teachers, or to identify the appropriate academic placement for a student—i.e., what courses or programs may be deemed most suitable, or what forms of academic support may be needed. Achievement tests are "backward-looking" in that they measure how well students have learned what they were expected to learn.
Aptitude tests attempt to predict a student's ability to succeed in an intellectual or physical endeavor by, for example, evaluating mathematical ability, language proficiency, abstract reasoning, motor coordination, or musical talent. Aptitude tests are "forward-looking" in that they typically attempt to forecast or predict how well students will do in a future educational or career setting. Aptitude tests are often a source of debate, since many question their predictive accuracy and value. College-admissions tests are used in the process of deciding which students will be admitted to a collegiate program. While there is a great deal of debate about the accuracy and utility of college-admissions tests, and many institutions of higher education no longer require applicants to take them, the tests are used as indicators of intellectual and academic potential, and some may consider them predictive of how well an applicant will do in a postsecondary program. International-comparison tests are administered periodically to representative samples of students in a number of countries, including the United States, for the purposes of monitoring achievement trends in individual countries and comparing educational performance across countries. A few widely used examples of international-comparison tests include the Programme for International Student Assessment (PISA), the Progress in International Reading Literacy Study (PIRLS), and the Trends in International Mathematics and Science Study (TIMSS). Psychological tests, including IQ tests, are used to measure a person's cognitive abilities and mental, emotional, developmental, and social characteristics. Trained professionals, such as school psychologists, typically administer the tests, which may require students to perform a series of tasks or solve a set of problems. Psychological tests are often used to identify students with learning disabilities or other special needs that would qualify them for specialized services.
Portfolio
A student portfolio is a compilation of academic work and other forms of educational evidence assembled for the purpose of (1) evaluating coursework quality, learning progress, and academic achievement; (2) determining whether students have met learning standards or other academic requirements for courses, grade-level promotion, and graduation; (3) helping students reflect on their academic goals and progress as learners; and (4) creating a lasting archive of academic work products, accomplishments, and other documentation. Advocates of student portfolios argue that compiling, reviewing, and evaluating student work over time can provide a richer, deeper, and more accurate picture of what students have learned and are able to do than more traditional measures—such as standardized tests, quizzes, or final exams—that only measure what students know at a specific point in time. Portfolios come in many forms, from notebooks filled with documents, notes, and graphics to online digital archives and student-created websites, and they may be used at the elementary, middle, and high school levels. Portfolios can be a physical collection of student work that includes materials such as written assignments, journal entries, completed tests, artwork, lab reports, physical projects (such as dioramas or models), and other material evidence of learning progress and academic accomplishment, including awards, honors, certifications, recommendations, written evaluations by teachers or peers, and self-reflections written by students. Portfolios may also be digital archives, presentations, blogs, or websites that feature the same materials as physical portfolios, but that may also include content such as student-created videos, multimedia presentations, spreadsheets, websites, photographs, or other digital artifacts of learning. Online portfolios are often called digital portfolios or e-portfolios, among other terms. 
In some cases, blogs or online journals may be maintained by students and include ongoing reflections about learning activities, progress, and accomplishments. Portfolios may also be presented—publicly or privately—to parents, teachers, and community members as part of a demonstration of learning, exhibition, or capstone project.
Diagnostic tests
Educational diagnostic testing is a form of assessment that occurs before instruction begins. The purpose of administering diagnostic tests is to determine what students already know about the concepts and skills to be covered by instruction. The tests are not graded. They can determine whether differentiated instruction is needed and can reveal students' preferred learning styles as well as their strengths, weaknesses, and misconceptions. Diagnostic tests are designed to closely follow what will be asked on a summative assessment and can be used to predict how well students will perform on high-stakes tests used to meet No Child Left Behind guidelines and state standards. In this respect, they can be considered a combination of both summative and formative assessments.
Criterion referenced tests
Criterion-referenced tests and assessments are designed to measure student performance against a fixed set of predetermined criteria or learning standards—i.e., concise, written descriptions of what students are expected to know and be able to do at a specific stage of their education. In elementary and secondary education, criterion-referenced tests are used to evaluate whether students have learned a specific body of knowledge or acquired a specific skill set—for example, the curriculum taught in a course, academic program, or content area. If students perform at or above the established expectations—for example, by answering a certain percentage of questions correctly—they will pass the test, meet the expected standards, or be deemed "proficient." On a criterion-referenced test, every student taking the exam could theoretically fail if they don't meet the expected standard; alternatively, every student could earn the highest possible score. In fact, it is not only possible but desirable for every student to pass the test or earn a perfect score. Criterion-referenced tests have been compared to driver's-license exams, which require would-be drivers to achieve a minimum passing score to earn a license. Criterion-Referenced vs. Norm-Referenced Tests: Norm-referenced tests are designed to rank test takers on a "bell curve," or a distribution of scores that resembles, when graphed, the outline of a bell—i.e., a small percentage of students performing poorly, most performing in the average range, and a small percentage performing well. To produce a bell curve each time, test questions are carefully designed to accentuate performance differences among test takers—not to determine if students have achieved specified learning standards, learned required material, or acquired specific skills. Unlike norm-referenced tests, criterion-referenced tests measure performance against a fixed set of criteria.
Criterion-referenced tests may include multiple-choice questions, true-false questions, "open-ended" questions (e.g., questions that ask students to write a short response or an essay), or a combination of question types. Individual teachers may design the tests for use in a specific course, or they may be created by teams of experts for large companies that have contracts with state departments of education. Criterion-referenced tests may be high-stakes tests—i.e., tests that are used to make important decisions about students, educators, schools, or districts—or they may be "low-stakes tests" used to measure the academic achievement of individual students, identify learning problems, or inform instructional adjustments. Well-known examples of criterion-referenced tests include Advanced Placement exams and the National Assessment of Educational Progress, which are both standardized tests administered to students throughout the United States. When testing companies develop criterion-referenced standardized tests for large-scale use, they usually have committees of experts determine the testing criteria and passing scores, or the number of questions students will need to answer correctly to pass the test. Scores on these tests are typically expressed as a percentage. It should be noted that passing scores—or "cut-off scores"—on criterion-referenced tests are judgment calls made by either individuals or groups. It's theoretically possible, for example, that a given test-development committee, if it had been made up of different individuals with different backgrounds and viewpoints, would have determined different passing scores for a certain test. For example, one group might determine that a minimum passing score is 70 percent correct answers, while another group might establish the cut-off score at 75 percent correct. For a related discussion, see proficiency. Criterion-referenced tests created by individual teachers are also very common in American public schools. 
For example, a history teacher may devise a test to evaluate understanding and retention of a unit on World War II. The criteria in this case might include the causes and timeline of the war, the nations that were involved, the dates and circumstances of major battles, and the names and roles of certain leaders. The teacher may design a test to evaluate student understanding of the criteria and determine a minimum passing score. While criterion-referenced test scores are often expressed as percentages, and many have minimum passing scores, the test results may also be scored or reported in alternative ways. For example, results may be grouped into broad achievement categories—such as "below basic," "basic," "proficient," and "advanced"—or reported on a 1-5 numerical scale, with the numbers representing different levels of achievement. As with minimum passing scores, proficiency levels are judgment calls made by individuals or groups that may choose to modify proficiency levels by raising or lowering them. The following are a few representative examples of how criterion-referenced tests and scores may be used: To determine whether students have learned expected knowledge and skills. If the criterion-referenced tests are used to make decisions about grade promotion or diploma eligibility, they would be considered "high-stakes tests." To determine if students have learning gaps or academic deficits that need to be addressed. For a related discussion, see formative assessment. To evaluate the effectiveness of a course, academic program, or learning experience by using "pre-tests" and "post-tests" to measure learning progress over the duration of the instructional period. To evaluate the effectiveness of teachers by factoring test results into job-performance evaluations. For a related discussion, see value-added measures. To measure progress toward the goals and objectives described in an "individualized education plan" for students with disabilities. 
To determine if a student or teacher is qualified to receive a license or certificate. To measure the academic achievement of students in a given state, usually for the purposes of comparing academic performance among schools and districts. To measure the academic achievement of students in a given country, usually for the purposes of comparing academic performance among nations. A few widely used examples of international-comparison tests include the Programme for International Student Assessment (PISA), the Progress in International Reading Literacy Study (PIRLS), and the Trends in International Mathematics and Science Study (TIMSS).
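The core distinction described above—comparing each student to a fixed criterion rather than to other students—can be shown in a few lines. The 70 and 75 percent cut scores echo the hypothetical committees mentioned earlier; both figures, and the student scores, are illustrative.

```python
def passes(percent_correct, cut_score):
    """Criterion-referenced decision: each student is compared to the
    fixed cut score, never to the other test takers."""
    return percent_correct >= cut_score

# Illustrative percent-correct scores for five students:
scores = [92, 88, 74, 71, 65]

# Two hypothetical committees set different cut scores (70 vs 75),
# showing that the criterion itself is a judgment call.
for cut in (70, 75):
    print(cut, [passes(s, cut) for s in scores])
```

Under the 70 percent cut, four of the five students pass; under the 75 percent cut, only two do. Nothing prevents all five from passing or all five from failing, which is exactly what distinguishes this from a norm-referenced ranking.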
Diagnostic assessment
Diagnostic assessment is a form of pre-assessment that allows a teacher to determine students' individual strengths, weaknesses, knowledge, and skills prior to instruction. It is primarily used to diagnose student difficulties and to guide lesson and curriculum planning.
Homework assignments
Homework, or a homework assignment, is a set of tasks assigned to students by their teachers to be completed outside of class. Common homework assignments may include a quantity or period of reading to be performed, writing or typing to be completed, math problems to be solved, material to be reviewed before a test, or other skills to be practiced. The effect of homework is debated. Research generally suggests that homework does little to improve academic performance among young children, though it may improve academic skills among older students. It can also create stress for students and their parents and reduce the amount of time that students could spend outdoors, exercising, playing sports, working, sleeping, or engaging in other activities.
Formative assessment
Formative assessment, including diagnostic testing, is a range of formal and informal assessment procedures conducted by teachers during the learning process in order to modify teaching and learning activities to improve student attainment. It typically involves qualitative feedback (rather than scores) for both student and teacher that focuses on the details of content and performance. It is commonly contrasted with summative assessment, which seeks to monitor educational outcomes, often for purposes of external accountability.
Guided reading groups
Guided reading is 'small-group reading instruction designed to provide differentiated teaching that supports students in developing reading proficiency'. The small group model allows children to be taught in a way that is intended to be more focused on their specific needs, accelerating their progress.
Organizational skills
The ability to use your time, energy, and resources effectively so that you achieve the things you want to achieve. For example: self-discipline and organizational skills are crucial to success in any profession.
Holistic score
In holistic scoring, scorers evaluate the effectiveness of responses in terms of a set of overall descriptions of signed communication. The scoring process is holistic in that the score assigned to your signed performance reflects the overall effectiveness of your communication. For the TASC, "signed communication proficiency" is defined as the ability to communicate successfully, both expressively and receptively. At least three scorers view each taped interview and, working in collaboration, rate your proficiency. Performance above Level C (i.e., Level A or B) is required to pass the TASC. See the TASC Holistic Rating Scale.
Tier-two instruction
In recent years, schools have begun to implement schoolwide prevention models, referred to as Response to Intervention (RTI; Brown-Chidsey & Steege, 2010; Haager, Klingner, & Vaughn, 2007; Jimerson, Burns, & VanDerHeyden, 2007; NASDSE, 2005), in an effort to increase student achievement, particularly in the areas of reading and math. Various terms have been used to describe such models, depending on whether the focus is on the academic or behavioral outcomes of students. RTI has often been used to describe an academic tiered model, whereas positive behavioral interventions and support (PBIS) is used to describe a behavioral tiered model. Recently, the term multi-tier system of supports (MTSS) has emerged to describe the framework in schools that provides both academic and behavioral support for students. The term RTI has evolved to refer to the process for academic decision making. In this article, we discuss the academic side of an MTSS. MTSS is a schoolwide approach that establishes a seamless connection between three components: (1) a viable, standards-aligned curriculum and research-based instructional practices; (2) a comprehensive assessment system; and (3) use of the problem-solving model. MTSS commonly employs three layers of instruction, called tiers, which are used to match the level of support to the level of instruction students need. Students' growth is monitored and instructional placements are adjusted accordingly if students do not make adequate progress (Jimerson et al., 2007). Within an MTSS, all students receive core instruction as the foundation for learning. Those students at risk of academic failure on the basis of their performance (and validation of their performance) on screening assessments are then provided supplemental support.
This first layer of additional support, Tier 2, occurs outside of the time dedicated to core instruction, in groups of 5-8 students, and focuses primarily on providing increased opportunities to practice and learn skills taught in the core (Baker, Fien, & Baker, 2010; Vaughn, Wanzek, Woodruff, & Linan-Thompson, 2007). When Tier 2 is insufficient to meet student need, students are provided Tier 3. Compared to Tier 2, Tier 3 is more explicit, focuses on remediation of skills, is provided for a longer duration of time (both in overall length of intervention and regularly scheduled minutes of instructional time), and occurs in smaller groups (i.e., groups of 1-3 students; Haager et al., 2007; Harn, Kame'enui, & Simmons, 2007; Vaughn, Linan-Thompson, & Hickman, 2003). One of the primary differences between MTSS and traditional service delivery models is the use of levels of instructional support to flexibly group students according to need. Having instructional tiers ensures proper support for each student because schools are able to match more intensive instruction and resources to students with more intensive needs and less intensive instruction and resources to students with less intensive needs. However, understanding the differences between tiers, particularly Tier 2 and Tier 3, may be difficult for several reasons related to the varying descriptions of Tier 2 and Tier 3 in the literature. For example, some researchers have described Tier 2 instruction as occurring two to three times per week and Tier 3 instruction as occurring daily (Brown-Chidsey, Bronaugh, & McGraw, 2009), whereas others have described both Tier 2 and Tier 3 support as occurring daily (Chard & Harn, 2008; Denton, Fletcher, Simos, Papanicolaou, & Anthony, 2007). 
Others have described Tier 2 support as utilizing small groups comprising 4-8 students and Tier 3 as groups of 1-3 students (Chard & Harn, 2008), yet others have described both Tier 2 and Tier 3 support as groups comprising 1-3 students (Algozzine, Cooke, White, Helf, Algozzine, & McClanahan, 2008; Denton et al., 2007). In addition to varying descriptions of Tier 2 and Tier 3 in the literature, there is also a lack of clarity around certain aspects of providing supplemental support. For example, Gessler Werts, Lambert, and Carpenter (2009) surveyed special education directors about RTI. Although 75% of the respondents reported that they had received formal training on RTI, there was limited consensus on various topics, including the amount of time needed for the delivery of Tier 2 and Tier 3 sessions. Other sites have reported needing more training on how to modify instruction that is currently in place (Bollman, Silberglitt, & Gibbons, 2007; Greenfield, Rinaldi, Proctor, & Cardarelli, 2010). In particular, sites need further clarification on how to intensify instruction at Tier 2 and Tier 3 that is more than just changing the intervention program used, such as learning how and when to modify pacing and group sizes and how to improve the coordination of instruction in the school (Callender, 2007). Additionally, sites often go through a trial and error process as they try out different processes within the model to find one that fits their site (Dulaney, 2012; White, Polly, & Audette, 2012). Dulaney (2012) described the implementation of RTI within a middle school, and a major finding was that the school went through an evolution process to find a clear procedure for implementing Tier 2 and Tier 3. Prewett et al. (2012) reported similar findings in their analysis of secondary RTI implementation. 
They analyzed implementation among 17 middle schools and many of the schools reported that they would try one method, only to refine it later after gaining more experience with RTI. Such dynamics can lead to slower implementation and results because sites have to spend time with processes or practices that aren't as efficient or effective as others. Unfortunately, schools do not have time to waste as different processes are tried, and students certainly can't wait either. The lowest 10% of readers in the middle of 1st grade are likely to stay the lowest 10% of readers unless they receive additional support (Good, Simmons, & Smith, 1998), and 74% of poor readers in the 3rd grade likely remain poor readers in the 9th grade (Fletcher & Lyon, 1998). Understanding how to intensify instruction between tiers and knowing how to match expended resources to student need is critical because studies indicate at-risk students make substantial gains in achievement and may even catch up to peers that are on-track when instruction is sufficiently intense (Harn et al., 2007; Vaughn et al., 2003). What would benefit sites in designing their MTSS frameworks and RTI models is a clear picture of how Tier 2 is different from Tier 3 along several dimensions, including (a) the group size, (b) the processes for monitoring the effectiveness and fidelity of the tiers, (c) what instructional adjustments can be made between the tiers (Greenfield et al., 2010; Murakami-Ramalho & Wilcox, 2012), and (d) ways to schedule time for the additional support (Dulaney, 2012; Greenfield et al., 2010; Swanson, Solis, Ciullo, & McKenna, 2012). To bring more clarity to the literature and to assist schools with understanding particular aspects of an academic MTSS framework, the differences between Tier 2 and Tier 3 support are presented in this article.
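The tier-matching logic described in this article can be sketched in a few lines. The percentile thresholds and the group-size comments below are assumptions for illustration only, since, as the article notes, cut points and tier definitions vary considerably across sites and across the literature.

```python
def place_tier(screening_percentile, at_risk_cut=25, intensive_cut=10):
    """Illustrative MTSS placement rule. The 25th- and 10th-percentile
    cut points are hypothetical; real sites set their own criteria and
    validate screening results before placement."""
    if screening_percentile <= intensive_cut:
        return 3   # most intensive support, e.g. groups of 1-3
    if screening_percentile <= at_risk_cut:
        return 2   # supplemental support, e.g. groups of 5-8
    return 1       # core instruction only

# Hypothetical screening percentiles for three students:
students = {"Ana": 8, "Ben": 22, "Cruz": 60}
print({name: place_tier(p) for name, p in students.items()})
# {'Ana': 3, 'Ben': 2, 'Cruz': 1}
```

In practice the placement would not be a one-time decision: progress-monitoring data would move students between tiers as their response to instruction is measured, which is the "flexible grouping" the article emphasizes.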
Informal assessment
Informal assessment is a procedure for obtaining information that can be used to make judgments about children's learning behavior and characteristics or programs using means other than standardized instruments. Observations, checklists, and portfolios are just some of the informal methods of assessment available to early childhood educators. The table below outlines methods for informal assessment, their purposes, and guidelines for using them.
Special Education teacher
Job Duties: Special education teachers typically do the following:
- Assess students' skills to determine their needs and to develop teaching plans
- Adapt lessons to meet the needs of students
- Develop Individualized Education Programs (IEPs) for each student
- Plan, organize, and assign activities that are specific to each student's abilities
- Teach and mentor students as a class, in small groups, and one-on-one
- Implement IEPs, assess students' performance, and track their progress
- Update IEPs throughout the school year to reflect students' progress and goals
- Discuss students' progress with parents, teachers, counselors, and administrators
- Supervise and mentor teacher assistants who work with students with disabilities
- Prepare and help students transition from grade to grade and after graduation
Career Overview: Special education teachers work with students who have a wide range of learning, mental, emotional, and physical disabilities. They adapt general education lessons and teach various subjects, such as reading, writing, and math, to students with mild and moderate disabilities. They also teach basic skills, such as literacy and communication techniques, to students with severe disabilities.
Special education teachers work as part of a team that typically includes general education teachers, counselors, school superintendents, and parents. As a team, they develop individualized educational programs (IEPs) specific to each student's needs. IEPs outline goals and services for each student, such as sessions with school psychologists, counselors, and special education teachers. Teachers also meet with parents, school administrators, and counselors to discuss updates and changes to the IEPs. Special education teachers' duties vary by the type of setting they work in, student disabilities, and teacher specialty. Some special education teachers work in classrooms or resource centers that only include students with disabilities. In these settings, teachers plan, adapt, and present lessons to meet each student's needs. They teach students in small groups or on a one-on-one basis. Students with disabilities may attend classes with general education students, also known as inclusive classrooms. In these settings, special education teachers may spend a portion of the day teaching classes together with general education teachers.
They help present information in a manner that students with disabilities can more easily understand. They also assist general education teachers in adapting lessons to meet the needs of the students with disabilities in their classes. Special education teachers also collaborate with teacher assistants, psychologists, and social workers to accommodate the requirements of students with disabilities. For example, they may show a teacher assistant how to work with a student who needs particular attention. Special education teachers work with students who have a wide variety of mental, emotional, physical, and learning disabilities. For example, some work with students who need assistance in subject areas, such as reading and math. Others help students develop study skills, such as using flashcards and text highlighting. Some special education teachers work with students who have physical and sensory disabilities, such as blindness and deafness, and with students who use wheelchairs. They may also work with those who have autism spectrum disorders and emotional disorders, such as anxiety and depression. Special education teachers work with students from preschool to high school. Some teachers work with students who have severe disabilities until the students are 21 years old. Special education teachers help students with severe disabilities develop basic life skills, such as how to respond to questions and how to follow directions. Some teach students with moderate disabilities the skills necessary to live independently and find a job, such as managing money and time. For more information about other workers who help individuals with disabilities develop skills necessary to live independently, see the profiles on occupational therapists and occupational therapy assistants and aides. Most special education teachers use computers to keep records of their students' performance, prepare lesson plans, and update IEPs.
Some teachers also use various assistive technology aids, such as Braille writers and computer software that helps them communicate with students. Work Environment: Special education teachers held about 442,800 jobs in 2012. Most special education teachers work in public schools. Some teach in magnet, charter, and private schools. Some also work with young children in childcare centers. A few work with students in residential facilities, hospitals, and students' homes. They may travel to these locations. Some teachers work with infants and toddlers at the child's home. They also teach the child's parents methods and ways to help the child develop skills. Helping students with disabilities can be highly rewarding. It also can be quite stressful—emotionally demanding and physically draining.
Miscue analysis
Miscue analysis is an assessment that helps a teacher identify the cueing systems used by a reader — the strategies a reader uses to make sense of a text. Instead of focusing on errors, miscue analysis focuses on what the student is doing right, so that he or she can learn to build on existing reading strategies. This section explains how to perform miscue analysis and how to use what you learn from it to help your students.
Multiple-choice test
Multiple choice is a form of objective assessment in which respondents are asked to select the one correct answer from a list of choices. The multiple-choice format is most frequently used in educational testing, in market research, and in elections, when a person chooses among multiple candidates, parties, or policies. Although E. L. Thorndike developed an early scientific approach to testing students, it was his assistant Benjamin D. Wood who developed the multiple-choice test. Multiple-choice testing increased in popularity in the mid-20th century, when scanners and data-processing machines were developed to check the results.
Norm-referenced tests
Norm-referenced refers to standardized tests that are designed to compare and rank test takers in relation to one another. Norm-referenced tests report whether test takers performed better or worse than a hypothetical average student, which is determined by comparing scores against the performance results of a statistically selected group of test takers, typically of the same age or grade level, who have already taken the exam. Calculating norm-referenced scores is called the "norming process," and the comparison group is known as the "norming group." Norming groups typically comprise only a small subset of previous test takers, not all or even most previous test takers. Test developers use a variety of statistical methods to select norming groups, interpret raw scores, and determine performance levels. Norm-referenced scores are generally reported as a percentage or percentile ranking. For example, a student who scores in the seventieth percentile performed as well as or better than seventy percent of other test takers of the same age or grade level, and thirty percent of students performed better (as determined by norming-group scores). Norm-referenced tests often use a multiple-choice format, though some include open-ended, short-answer questions. They are usually based on some form of national standards, not locally determined standards or curricula. IQ tests are among the most well-known norm-referenced tests, as are developmental-screening tests, which are used to identify learning disabilities in young children or determine eligibility for special-education services. A few major norm-referenced tests include the California Achievement Test, Iowa Test of Basic Skills, Stanford Achievement Test, and TerraNova. The following are a few representative examples of how norm-referenced tests and scores may be used:
- To determine a young child's readiness for preschool or kindergarten. These tests may be designed to measure oral-language ability, visual-motor skills, and cognitive and social development.
- To evaluate basic reading, writing, and math skills. Test results may be used for a wide variety of purposes, such as measuring academic progress, making course assignments, determining readiness for grade promotion, or identifying the need for additional academic support.
- To identify specific learning disabilities or conditions, such as autism, dyslexia, or nonverbal learning disability, or to determine eligibility for special-education services.
- To make program-eligibility or college-admissions decisions (in these cases, norm-referenced scores are generally evaluated alongside other information about a student). Scores on SAT or ACT exams are a common example.
Norm-Referenced vs. Criterion-Referenced Tests
Norm-referenced tests are specifically designed to rank test takers on a "bell curve," or a distribution of scores that resembles, when graphed, the outline of a bell—i.e., a small percentage of students performing well, most performing average, and a small percentage performing poorly. To produce a bell curve each time, test questions are carefully designed to accentuate performance differences among test takers, not to determine if students have achieved specified learning standards, learned certain material, or acquired specific skills and knowledge. Tests that measure performance against a fixed set of standards or criteria are called criterion-referenced tests. Criterion-referenced test results are often based on the number of correct answers provided by students, and scores might be expressed as a percentage of the total possible number of correct answers. On a norm-referenced exam, however, the score would reflect how many more or fewer correct answers a student gave in comparison to other students.
Hypothetically, if all the students who took a norm-referenced test performed poorly, the least-poor results would rank students in the highest percentile. Similarly, if all students performed extraordinarily well, the least-strong performance would rank students in the lowest percentile. It should be noted that norm-referenced tests cannot measure the learning achievement or progress of an entire group of students, but only the relative performance of individuals within a group. For this reason, criterion-referenced tests are used to measure whole-group performance.
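Because norm-referenced scores are purely relative, the same raw score can land at opposite ends of the percentile scale depending on the norming group. A minimal sketch in Python makes the point; the norming-group scores below are invented for illustration, not drawn from any real test:

```python
from bisect import bisect_right

def percentile_rank(score, norming_scores):
    """Percent of the norming group scoring at or below `score`."""
    ranked = sorted(norming_scores)
    return 100.0 * bisect_right(ranked, score) / len(ranked)

# Two hypothetical norming groups on a 100-point test.
strong_group = [82, 85, 88, 90, 91, 93, 95, 96, 98, 99]
weak_group = [20, 25, 28, 30, 33, 35, 38, 40, 42, 45]

# The same raw score of 80 ranks at the bottom of one group
# and at the top of the other.
print(percentile_rank(80, strong_group))  # 0.0
print(percentile_rank(80, weak_group))    # 100.0
```

This mirrors the hypothetical above: against a uniformly strong norming group even a solid raw score yields a low percentile, and against a uniformly weak group a mediocre score yields a high one.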
Survey test
Represents a measure of general performance only.
Interest inventory
Requires participants to indicate personal likes and dislikes.
Self-assessment
Self-assessment is a process of formative assessment during which students reflect on and evaluate the quality of their work and their learning, judge the degree to which their work reflects explicitly stated goals or criteria, identify strengths and weaknesses in their work, and revise accordingly (2007, p. 160).
Reading comprehension
Simply put, reading comprehension is the act of understanding what you are reading. While the definition can be simply stated, the act is not simple to teach, learn, or practice. Reading comprehension is an intentional, active, interactive process that occurs before, during, and after a person reads a particular piece of writing. Reading comprehension is one of the pillars of the act of reading. When a person reads a text he engages in a complex array of cognitive processes. He is simultaneously using his awareness and understanding of phonemes (individual sound "pieces" in language), phonics (the connection between letters and sounds and the relationship between sounds, letters, and words), and the ability to comprehend or construct meaning from the text. This last component of the act of reading is reading comprehension. It cannot occur independent of the other two elements of the process. At the same time, it is the most difficult and most important of the three. There are two elements that make up the process of reading comprehension: vocabulary knowledge and text comprehension. In order to understand a text the reader must be able to comprehend the vocabulary used in the piece of writing. If the individual words don't make sense, then the overall story will not either. Children can draw on their prior knowledge of vocabulary, but they also need to be continually taught new words. The best vocabulary instruction occurs at the point of need. Parents and teachers should pre-teach new words that a child will encounter in a text or aid her in understanding unfamiliar words as she comes upon them in the writing. In addition to being able to understand each distinct word in a text, the child also has to be able to put them together to develop an overall conception of what the text is trying to say. This is text comprehension. Text comprehension is much more complex and varied than vocabulary knowledge.
Readers use many different text comprehension strategies to develop reading comprehension. These include monitoring for understanding, answering and generating questions, summarizing, and being aware of and using a text's structure to aid comprehension.
Small group instruction
Small group instruction typically refers to a teacher working with a small group of students on a specific learning objective. These groups consist of 2-4 students and provide these students with a reduced student-teacher ratio. Small group instruction usually follows whole group instruction. It allows teachers to work more closely with each student, reinforce skills learned in the whole group instruction, and check for student understanding. It allows students more of the teacher's attention and gives them a chance to ask specific questions they may have about what they learned. Teachers can use small group instruction to provide struggling students with intervention as well. There are two schools of thought when it comes to group size: bigger is better versus less is more. In education circles, the latter is definitely the more universally accepted way to go. In part because of the increased popularity of programs such as "Response to Intervention," small group instruction is now commonplace in most schools. Teachers see the value in this approach. Student-teacher ratios have always been a factor at the heart of school improvement conversations. In some ways, utilizing small group instruction on a regular basis can be a way to improve that student-teacher ratio. Small group instruction gives teachers a natural opportunity to provide targeted, differentiated instruction for small groups of students. It gives the teacher an opportunity to evaluate and assess more closely what each student can do and to build strategic plans for each student around those assessments. Students who may struggle to ask questions and participate in a whole group setting may thrive in a small group where they feel more comfortable and are not so overwhelmed. Furthermore, small group instruction is fast-paced, which typically helps students maintain focus.
The biggest challenge with small group instruction is establishing a routine and managing the other students with whom you are not working directly. In a class of 20-30 students, you may have 5-6 small groups to work with during small group instruction time. The other groups must be working on something. Students must be taught to work independently during this time. The easiest way to ensure that this is happening is to create several engaging center activities for the other students to work on. These activities reinforce skills being taught during whole group instruction and do not require new instruction, freeing the teacher to work with one specific group. Students can then rotate from one station to another, with each group eventually getting the small group instruction with the teacher. Committing to making small group instruction work may not be an easy task. It is an approach that does take a lot of preparation time and effort. However, the powerful opportunities it provides can pay big dividends for your students. Ultimately, providing your students with high-quality small group instruction can make a significant academic difference for all of your students.
Benchmark test
Standardized benchmark assessments: Typically, on the school-wide level, benchmark testing couples student performance with extensive reporting systems in order to break down test results by the same student categories required under the federal No Child Left Behind Act (i.e., race, income, disability, and English proficiency), in addition to providing individual progress reports at the district, school, classroom, and student levels. According to the California Department of Education, benchmark assessments often include performance tasks, but more frequently use "standardized administration and scoring procedures to help maintain validity, reliability, and fairness." Teachers usually administer common benchmark assessments to all students in the same course and grade level in the district at prescribed intervals — most often at the end of a unit of study or at the end of a quarter. "Common assessment instruments measure proficiency on subsets of standards and might include writing samples, literary responses, oral reports, demonstrations showing understanding of how-to manuals, dramatizations, open-ended mathematics problems, memory maps, laboratory investigations, keyboarding or typing tests, and projects using specialized software in the school's computer lab." Teachers can use these standardized assessments to evaluate the degree to which students have mastered selected standards, both in their own classrooms and in comparison with other grade-level classrooms in the district.
Summative assessment
Summative assessments are used to evaluate student learning, skill acquisition, and academic achievement at the conclusion of a defined instructional period—typically at the end of a project, unit, course, semester, program, or school year. Generally speaking, summative assessments are defined by three major criteria: The tests, assignments, or projects are used to determine whether students have learned what they were expected to learn. In other words, what makes an assessment "summative" is not the design of the test, assignment, or self-evaluation, per se, but the way it is used—i.e., to determine whether and to what degree students have learned the material they have been taught. Summative assessments are given at the conclusion of a specific instructional period, and therefore they are generally evaluative, rather than diagnostic—i.e., they are more appropriately used to determine learning progress and achievement, evaluate the effectiveness of educational programs, measure progress toward improvement goals, or make course-placement decisions, among other possible applications. Summative-assessment results are often recorded as scores or grades that are then factored into a student's permanent academic record, whether they end up as letter grades on a report card or test scores used in the college-admissions process. While summative assessments are typically a major component of the grading process in most districts, schools, and courses, not all assessments considered to be summative are graded. Summative assessments are commonly contrasted with formative assessments, which collect detailed information that educators can use to improve instruction and student learning while it's happening. In other words, formative assessments are often said to be for learning, while summative assessments are of learning. Or as assessment expert Paul Black put it, "When the cook tastes the soup, that's formative assessment. 
When the customer tastes the soup, that's summative assessment." It should be noted, however, that the distinction between formative and summative is often fuzzy in practice, and educators may have divergent interpretations and opinions on the subject. Some of the most well-known and widely discussed examples of summative assessments are the standardized tests administered by states and testing organizations, usually in math, reading, writing, and science. Other examples of summative assessments include: End-of-unit or chapter tests. End-of-term or semester tests. Standardized tests that are used for the purposes of school accountability, college admissions (e.g., the SAT or ACT), or end-of-course evaluation (e.g., Advanced Placement or International Baccalaureate exams). Culminating demonstrations of learning or other forms of "performance assessment," such as portfolios of student work that are collected over time and evaluated by teachers or capstone projects that students work on over extended periods of time and that they present and defend at the conclusion of a school year or their high school education. While most summative assessments are given at the conclusion of an instructional period, some summative assessments can still be used diagnostically. For example, the growing availability of student data, made possible by online grading systems and databases, can give teachers access to assessment results from previous years or other courses. By reviewing this data, teachers may be able to identify students more likely to struggle academically in certain subject areas or with certain concepts. In addition, students may be allowed to take some summative tests multiple times, and teachers might use the results to help prepare students for future administrations of the test.
It should also be noted that districts and schools may use "interim" or "benchmark" tests to monitor the academic progress of students and determine whether they are on track to mastering the material that will be evaluated on end-of-course tests or standardized tests. Some educators consider interim tests to be formative, since they are often used diagnostically to inform instructional modifications, but others may consider them to be summative. There is ongoing debate in the education community about this distinction, and interim assessments may be defined differently from place to place.
Informal reading inventory
The Informal Reading Inventory (IRI) is an individually administered survey designed to help you determine a student's reading instructional needs. A student's performance on the IRI will help you determine the instructional level and the amount and kind of support the student is likely to need in Invitations to Literacy. Specifically, the IRI will help you assess a student's strengths and needs in these areas: word recognition word meaning reading strategies comprehension The IRI materials consist of a Student Booklet and a Test Manual. They contain word lists and reading selections for these levels of Invitations to Literacy: Levels 1.1-1.3, 1.4-1.5, 2, 3, 4, 5, and 6. In the Student Booklet, there are two or three reading passages for each level of the inventory. They are excerpts from selections at the same grade level of the reading program. The Test Manual contains the information and materials you need to administer and score the IRI. While an IRI is regarded as a suitable tool for determining students' reading abilities and needs, it is not infallible. You should use the information from the IRI and the Baseline Group Tests, along with any other information you have about a student, to make an initial instructional plan. After you have observed the student for two to three weeks, you should have a better idea of the student's reading abilities. Your observations may suggest different strengths and needs. Adjustments should be made as necessary.
Standard B:
The beginning teacher: Creates assessments that are congruent with instructional goals and objectives and communicates assessment criteria and standards to students based on high expectations for learning.
Standard A:
The beginning teacher: Demonstrates knowledge of the characteristics, uses, advantages and limitations of various assessment methods and strategies, including technological methods and methods that reflect real-world applications.
Standard D:
The beginning teacher: Knows how to promote students' ability to use feedback and self-assessment to guide and enhance their own learning.
Standard E:
The beginning teacher: Responds flexibly to various situations (e.g., lack of student engagement in an activity, the occurrence of an unanticipated learning opportunity) and adjusts instructional approaches based on ongoing assessment of student performance.
Standard C:
The beginning teacher: Uses appropriate language and formats to provide students with timely, effective feedback that is accurate, constructive, substantive and specific.
Percentile rank
The percentile rank of a score is the percentage of scores in its frequency distribution that are equal to or lower than it. For example, a test score that is greater than or equal to 75% of the scores of people taking the test is said to be at the 75th percentile, where 75 is the percentile rank. In educational measurement, a percentile band is a range of percentile ranks, often appearing on a score report, that shows the range within which the test taker's "true" percentile rank probably occurs. The "true" value refers to the rank the test taker would obtain if there were no random errors involved in the testing process. Percentile ranks are commonly used to clarify the interpretation of scores on standardized tests. In test theory, the percentile rank of a raw score is interpreted as the percentage of examinees in the norm group who scored at or below the score of interest. Percentile ranks are not on an equal-interval scale; that is, equal differences in percentile rank do not correspond to equal differences in underlying scores. For example, because of the bell-curve shape of the distribution, the raw-score distance between the 25th and 50th percentiles is not the same as the distance between the 35th and 60th percentiles, even though both spans cover 25 percentile points. In raw-score terms, some percentile ranks are closer together than others: percentile rank 30 is closer on the bell curve to 40 than it is to 20.
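The unequal-interval point can be checked numerically. As a sketch, assume scores follow a normal distribution with a hypothetical mean of 100 and standard deviation of 15; Python's standard library can then convert percentile ranks back into raw scores:

```python
from statistics import NormalDist

# Hypothetical normal score scale: mean 100, standard deviation 15.
dist = NormalDist(mu=100, sigma=15)

def raw_gap(p_low, p_high):
    """Raw-score distance between two percentile ranks."""
    return dist.inv_cdf(p_high / 100) - dist.inv_cdf(p_low / 100)

# Both spans cover 25 percentile points, but the raw-score gaps differ.
print(round(raw_gap(25, 50), 1))  # 10.1
print(round(raw_gap(35, 60), 1))  # 9.6

# Percentile rank 30 is closer (in raw scores) to 40 than to 20.
print(round(raw_gap(30, 40), 1))  # 4.1
print(round(raw_gap(20, 30), 1))  # 4.8
```

Near the middle of the bell curve, scores are densely packed, so a given raw-score change moves the percentile rank much more than it would in the tails.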
Surveys
The survey is a method for collecting information or data as reported by individuals. Surveys are questionnaires (or a series of questions) that are administered to research participants who answer the questions themselves. Since the participants are providing the information, it is referred to as self-report data. Surveys are used to get an idea of how a group or population feels about a number of things, such as political debates, new businesses, classes, and religious views. Additionally, surveys can be a way for people to measure how often or how little people engage in different behaviors, such as smoking or drinking alcohol.
Competency 010:
The teacher monitors student performance and achievement, provides students with timely, high-quality feedback, and responds flexibly to promote learning for all students.
Grade-equivalent score
What are grade equivalents? Grade equivalents are scores based on the performance of students in the test's norming group. The grade equivalent represents the grade level and month of the typical (median) score for students. For example, a 5th-grade student who earns a 5.9 on a norm-referenced test has earned a score similar to the median score of students in the test's norming group who were in their ninth month of fifth grade. Normative data are often collected at one point in the year from students in two or more grades. To obtain scores for all months and for grades outside the norming sample, scores are interpolated and extrapolated from the actual student scores.
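The interpolation step can be sketched with simple linear interpolation between two norming anchors. The anchor grade equivalents and median raw scores below are invented for illustration, not taken from any real test, and real publishers use more sophisticated scaling methods:

```python
# Hypothetical norming anchors: grade equivalent -> median raw score.
ANCHORS = {4.1: 31.0, 5.1: 40.0}

def raw_to_grade_equivalent(raw, ge_low=4.1, ge_high=5.1):
    """Linearly interpolate a grade equivalent between two norming anchors."""
    low, high = ANCHORS[ge_low], ANCHORS[ge_high]
    fraction = (raw - low) / (high - low)
    return round(ge_low + fraction * (ge_high - ge_low), 1)

# A raw score halfway between the anchors maps halfway between the GEs.
print(raw_to_grade_equivalent(35.5))  # 4.6
```

Extrapolating beyond the anchors (e.g., reporting a GE of 7.2 for a strong fifth grader) is exactly the practice that makes grade equivalents easy to misinterpret: it does not mean the student can do seventh-grade work, only that the score exceeds the norming data collected.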