Assessment in the Classroom (Professional Knowledge)


Normal Distribution

-A pattern of educational characteristics or scores in which most scores lie in the middle range and only a few lie at either extreme. -In other words, some scores will be low and some will be high, but most scores will be moderate. -Has a bell-shaped curve and is called 'normal' because it is the most common distribution that teachers see in a classroom.

Method Bias

-Factors surrounding the administration of the test that may impact the results, such as the testing environment, the length of the test, and the assistance that can be provided by the teacher administering the test. Example: If a student from one culture is accustomed to, and expects, help on a test, but is faced with a situation in which the teacher is unable to provide any guidance, this may lead to inaccurate test results.

Performance Assessment

-Formal assessment in which students demonstrate their knowledge and skills in a non-written fashion. -These are focused on demonstration rather than written response. -These assessments provide educators with an alternative method to assess students' knowledge and abilities, but they must be used with a specific purpose in mind. They can assess both products and processes. Examples: Oral presentations, physical assessments in P.E., dissecting a pig in Biology, etc.

Z-Scores

-Have a mean of 0 and a standard deviation of 1. -Tells us how many standard deviations someone is above or below the mean. -To calculate, subtract the mean from the raw score and divide by the standard deviation. Example: If we have a raw score of 85, a mean of 50, and a standard deviation of 10, we will calculate a z-score of 3.5.
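A minimal Python sketch of the calculation described above, using the card's own numbers (raw score 85, mean 50, standard deviation 10):

```python
# z-score = (raw score - mean) / standard deviation
def z_score(raw, mean, sd):
    return (raw - mean) / sd

print(z_score(85, 50, 10))  # 3.5 -> 3.5 standard deviations above the mean
```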

Use of High-Stakes Testing Results

1. To determine yearly progress in meeting state-determined standards. 2. For promotion to the next grade level. 3. For awarding high school diplomas.

Paper-Pencil Assessment

A type of formal assessment in which students provide written responses to written items. These can include paper tests, scantrons, etc. Typical Items: Paragraph responses, problem-solving, etc.

Types of Standardized Assessments

Achievement, Scholastic Aptitude and Intelligence, Specific Aptitude, and School Readiness.

Types of Norm-Referenced Scores

Age/Grade Equivalent, Percentile, and Standard

Assessment as a Mechanism for Review

Assessments can serve to promote constant review of material, which aids in moving the material from short-term to long-term memory in order to be accessed in the future.

Types of Test Bias

Construct, Method, Item

Age/Grade Equivalent Scores Disadvantages

Parents and some educators often misinterpret these scores, especially when scores indicate the student is below expected age or grade level.

Mean

-The most well known statistic of summary. -The arithmetic average score.

Process

-When we don't have a product to assess, we assess this and the behaviors the student displays. -Giving an oral presentation, singing a song, demonstrating a tennis swing, etc. -Teachers may be interested in examining students' cognitive _____________ as well. Example of assessing cognitive _____________: if a teacher is trying to understand the thinking ___________ behind her students' knowledge of force and acceleration, she might assign an activity where students perform experiments to determine what objects will roll down an incline. The teacher would first have students make predictions, then complete the experiments. The student predictions allow the teacher to gauge their understanding of the scientific principles behind the experiment.

reliable; valid

An assessment can be ______________ but not ________________.

Scholastic Aptitude and Intelligence Assessment

Designed to assess a general capacity to learn and used to predict future academic achievement. May assess what a student is presumed to have learned in the past. These assessments include vocabulary terms presumably encountered over the years and analogies intended to assess how well a student can recognize similarities among well-known relationships. These assessments allow for comparison of multiple students across schools and states. However, these tests can cause test anxiety, which may keep students from performing as well, resulting in an inaccurate reflection of the student's actual or potential academic abilities. Example: SAT

School Readiness Assessment

Designed to assess cognitive skills important to succeed in a typical kindergarten or first grade curriculum. These assessments are usually given 6 months before a child enters school. Provide info regarding developmental delays that need to be addressed immediately. These tests have been found to have a low correlation with students' actual academic performance beyond the first few months of school. They also usually only evaluate cognitive development (however, social and emotional development are critical to one's success in kindergarten and first grade).

Types of Reliability

Inter-rater, Test-retest, parallel forms, internal consistency

Statistics of Summary

Mean, median, and mode.

Standard Scores used in Education

Stanines and Z-Scores

Deviation

The amount an assessment score differs from a fixed value, such as the mean.

Criterion-Referenced Scores Disadvantage

Complex skills are difficult to assess through a single score on an assessment.

Validity

The extent to which an assessment accurately measures what it is intended to measure. For example, if an assessment intends to measure achievement and ability in a particular subject area but then measures concepts that are completely unrelated, the assessment is not _______.

Percentile Rank Disadvantages

These scores can sometimes overestimate differences among students whose scores fall near the mean of the norm group, and underestimate differences among students whose scores fall in the extreme lower or upper range of the scores.

Age/Grade Equivalent Scores

These scores compare students by age or grade. In other words, the scores indicate the approximate _________ level of students to whom an individual student's performance is most similar.

Standard Score Disadvantages

These scores might be confusing to understand without a basic knowledge of statistics.

Small Standard Deviation

When scores all tend to lie within the same region (all good scores or all bad scores; no real variety). Example: a class took a test and there's a small standard deviation because all of the scores clustered together right around the top, meaning almost all of the students got an A on the test. That could mean that the students all demonstrated mastery of the material, or it could mean that the test was just too easy. If all of the scores clumped together on the other end, it would mean that most of the students failed the test.

Reliability Coefficient

-A numerical index of reliability, typically ranging from 0 to 1. A number closer to 1 indicates high _______________. A low ____________ indicates more error in the assessment results, usually due to temporary factors/conditions. -Reliability is considered good or acceptable if the _______________________ is .80 or above. -This compares two sets of scores for a single assessment, or two scores from two tests that assess the same concept.
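As a hedged illustration, the coefficient can be computed as the correlation between two sets of scores from the same students (e.g., two parallel forms). The score lists below are hypothetical, and statistics.correlation requires Python 3.10 or later:

```python
from statistics import correlation  # Pearson's r; available in Python 3.10+

form_a = [78, 85, 90, 62, 71, 88]  # hypothetical scores on version 1
form_b = [75, 88, 92, 60, 70, 85]  # hypothetical scores on version 2 (same students)

r = correlation(form_a, form_b)
print(round(r, 2), "acceptable" if r >= 0.80 else "low reliability")
```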

Criterion-Referenced Scoring

-A score on an assessment that specifically indicates what a student is capable of or what knowledge they possess. -Tell us how well a student performs against an objective or standard, as opposed to against another student. -Most appropriate when an educator wants to assess the specific concepts or skills a student has learned through classroom instruction. -These generally have a cut score, which determines success or failure based on an established percentage correct. Example: In order for a student to successfully demonstrate their knowledge of the math concepts we've discussed in class, they must answer at least 80% of the test questions correctly. A score of 80% or above demonstrates the student's knowledge of the subject area.
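A small sketch of the cut-score logic in Python; the 80% threshold mirrors the card's example, and the score values are hypothetical:

```python
# Pass/fail against an established cut score (here, 80% correct).
def meets_criterion(num_correct, num_items, cut=0.80):
    return num_correct / num_items >= cut

print(meets_criterion(42, 50))  # 84% correct -> True (criterion met)
print(meets_criterion(38, 50))  # 76% correct -> False (criterion not met)
```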

Factors that Impact Validity

-A student's reading ability -Student self-efficacy -Test anxiety levels

Product

-A tangible creation by a student that could take the form of a poster, drawing, invention, etc. -Performance assessments are useful in assessing these products in order to gauge a student's level of understanding or ability.

Test Bias

-A test that yields clear and systematic differences among the results of the test-takers. -Typically based on group membership of the test-takers, such as gender, race, and ethnicity. -A test is considered this when the scores of one group are significantly different from, and have higher predictive validity than, those of another group.

Standard Deviation

-A useful measure of variability. -Measures the average deviation from the mean in standard units. -The mean and _____________________ can be used to divide the normal distribution into parts. The vertical line in the middle of the curve shows the mean, and the lines to either side reflect the __________________. -A small _____________ tells us that the scores are close together, and a large number tells us that they are spread apart more. Example: a set of classroom tests with a ______________ of 10 tells us that the individual scores were more similar than a set of classroom tests with a ___________ of 35.
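A short Python illustration of the contrast described above, using hypothetical score sets; pstdev computes the population standard deviation:

```python
from statistics import mean, pstdev

clustered = [72, 75, 78, 80, 81, 84, 85]    # hypothetical: scores close together
spread_out = [40, 55, 68, 80, 92, 99, 100]  # hypothetical: scores widely spread

print(round(mean(clustered), 1), round(pstdev(clustered), 1))    # small standard deviation
print(round(mean(spread_out), 1), round(pstdev(spread_out), 1))  # large standard deviation
```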

Group Performance Assessments

-Allow teachers to assign complex projects and tasks that are best accomplished by many students. -These also allow students to assess their peers, which provides a different level of assessment for the teacher.

Construct

-An internal trait that cannot be directly observed but must be inferred from consistent behavior observed in people. Self-esteem, intelligence, and motivation are all examples of a ________________.

Accountability

-An obligation of educators to accept responsibility for students' performance on high-stakes assessments. -The NCLB Act of 2001 mandates some form of this in all schools for grades 3-8.

Standardized Assessments

-Assessments constructed by experts and published for use in many different schools and classrooms. These assessments are used in various contexts and serve multiple purposes. -Implemented in American classrooms in the early 20th century. -Required in most states due to the No Child Left Behind Act of 2001 (NCLB). -May consist of different types of items, such as: multiple-choice, true-false, matching, essay, and spoken items. They may also be paper-pencil or computer-based.

Norm-Referenced Scores

-Compares one student's performance on an assessment with the average performance of other peers. -Useful when educators want to make comparisons across large numbers of students or when making decisions on student placement (in K-12 schools or college) and grade advancement. Examples: SAT, ACT, GRE

Types of Validity

-Content -Predictive -Construct

Guidelines for High-Stakes Testing

-Created by national organizations, such as the American Psychological Association, in an effort to promote fairness and avoid unintended consequences of these types of tests. These include: 1. Decisions on a student's continued education should not be based on the results of one single test, but on a comprehensive set of exams and performance assessments. 2. If the results of a single assessment are used to determine a student's continued education (e.g., grade promotion or graduation), there should be evidence that the test addresses the specific content and skills that students have had an opportunity to learn. 3. School districts and states should use test results only for their clearly defined purposes. 4. Special accommodations should be made for students with limited English proficiency and students with disabilities.

Conditions that Impact Reliability

-Day-to-day changes in the student (energy levels, motivation, emotions, hunger, etc.). -Physical environment (classroom temperature, outside noises and distractions, etc.). -Administration of the assessment (test instructions, differences in how the teacher responds to questions about the test, etc.). -Test length (generally, the longer the test, the higher the reliability). -Subjectivity of the test scorer.

Cumulative Percentages

-Determine placement among a group of scores. They DO NOT indicate how much greater or less one score is than another. Instead, scores are ranked on an ordinal scale and are used to determine order or rank only; the highest score in the group is always the top score, no matter what that score is. -Ranked on a scale from 0%-100%. -Changing raw scores to these is one way to standardize raw scores within a certain population. Example: A student received a raw test score of 85. If 85 were the highest grade on the test, the cumulative percentage would be 100%. Since the student scored at the 100th percentile, she did better than or the same as everyone else in the class. That means that everyone else made either an 85 or lower on the test.
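A minimal Python sketch of the idea, using the card's example of 85 being the top score; the class scores themselves are hypothetical:

```python
# Cumulative percentage: the percentage of scores in the group that are
# less than or equal to a given raw score.
def cumulative_percentage(score, all_scores):
    at_or_below = sum(1 for s in all_scores if s <= score)
    return 100 * at_or_below / len(all_scores)

class_scores = [60, 70, 72, 78, 80, 82, 85, 85]  # hypothetical class results
print(cumulative_percentage(85, class_scores))   # 100.0 -> top of the group
```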

Parallel Forms Reliability

-Determined by comparing two different assessments that were constructed using the same content domain. Example: if my science teacher created an assessment with 100 questions that measure the same science content, she would divide the test up into two versions with 50 questions each and then give us two versions of the test. She would use a score from version 1 and a score from version 2 to assess ________________________________.

How to Promote Standardized Testing in the Classroom

-Educators can remind students of the value of test scores for tracking academic progress over time. -Educators can encourage students to do well, but also remind them that their skills and knowledge are also assessed in other ways. -Educators should acknowledge the shortfalls of these tests but also promote the benefits. -Educators should allow students to practice test-taking skills that are needed for this type of testing (such as giving timed assignments) in order to decrease test anxiety.

Uses for Standardized Tests

-Evaluate a student's understanding/knowledge in a particular area. -Evaluate admissions potential. -Assess language and technical skill proficiency. -Often used to determine the need for psychological services. -Sometimes used to evaluate aptitude for a career in the armed forces.

Test-takers with cognitive or academic difficulties

-Have poor listening, reading, and writing skills. -Perform inconsistently on tests due to off-task behaviors, such as daydreaming and doodling. -Have higher than average test anxiety.

Disadvantages of Standardized Tests in 9th-12th Grade

-Increased skepticism regarding the usefulness and validity of these types of tests. -Decreased motivation to perform well on tests, and many students in this grade range have deemed themselves "poor test takers" and simply stop trying.

Percentile Rank

-Indicate the percentage of peers in the norm group with raw scores less than or equal to a specific student's raw score. -Ranked on a scale from 0%-100%.

Disadvantages of High-Stakes Testing

-May lead to inaccurate inferences of student performance due to non-test factors such as anxiety and motivation. -Teachers are burdened with more standards to teach and end up teaching to the tests. -Does not assess higher-level critical thinking skills. -Since each state can determine standards, different test criteria may lead to different overall conclusions on student and school achievement/performance. -Emphasis is placed on punishing lower-performing schools and personnel and not enough emphasis on helping those schools improve.

Measurement of Validity

-Measured using a coefficient. -Typically two scores from two assessments or measures are calculated to determine a number between 0 and 1. -Higher coefficients=higher validity. -Most assessments with a coefficient of .60 and above are considered acceptable or highly valid.

Static Assessments

-Most common form of performance and paper-pencil based assessments. -Focus on students' existing abilities and knowledge.

Large Standard Deviation

-Most teachers strive for this because it means that the scores on a test varied across the grade range. -This indicates that a few students did really well, a few students failed, and a lot of the students were somewhere in the middle. -This allows teachers to conclude that they taught the material effectively (because at least some of the students got an A) and that the test was neither too difficult nor too easy.

Cultural Bias

-Most test biases are considered this. -The extent to which a test offends or penalizes some students based on their ethnicity, gender, or SES.

Purposes of Assessment

-Motivators -Mechanisms for Review -Feedback

Construct Bias

-Occurs when the construct measured yields significantly different results for test-takers from the original culture for which the test was developed and test-takers from a new culture. Example: basing an IQ test on items from American culture would create bias against test-takers from another culture.

Formative Assessment

-Ongoing assessments. -Ungraded testing used before or during instruction to aid in planning and diagnosis. -Reviews and observations used to evaluate students in the classroom. Teachers use these to continually improve instructional methods and curriculum. Student feedback is also a type of this assessment. Examples: quizzes, lab reports, and chapter exams.

Test-takers with social or behavioral difficulties

-Perform inconsistently on tests due to off-task behaviors. -Have lower than average motivation for testing.

Restricted Performance

-Performances that are relatively short in duration and involve a one-time performance of a particular skill or activity. Example: a PE instructor asks her students to perform a push-up. She wants to assess their form for this one particular exercise.

Item Bias

-Refers to problems that occur with individual items on the assessment. -May occur because of poor use of grammar, choice of cultural phrases, and poorly written assessment items. Example: the use of the phrase "the last straw" to indicate the thing that makes one lose control, would be difficult for a test-taker from a different culture to interpret. The incorrect interpretation of culturally biased phrases within test items would lead to inaccurate test results.

Disadvantages of Standardized Tests in Kindergarten-2nd Grade

-Students have short attention spans, and there is a large amount of variability in their attention span. -Very little motivation to do well due to students' inability to understand the purpose of the test. -Test results are also inconsistent among this grade range.

Disadvantages of Standardized Tests in 6th-8th Grade

-Students this age tend to have more test anxiety. -Students start becoming skeptical about the value of these tests.

Dynamic Assessments

-Systematically examine how a student's knowledge or reasoning may change as the result of learning or performing specific tasks. -This concept is consistent with Vygotsky's concept of Zone of Proximal Development and provides teachers with info about what students are able to accomplish with appropriate structure and guidance.

Disadvantages of Standardized Tests in 3rd-5th Grade

-Test scores are interpreted as "end-all-be-all" for evidence of academic ability, which causes a lot of stress and anxiety. -Students in these grades still have a wide range of abilities and levels of understanding, which leads to wide variability in scores.

Summative Assessment

-Testing that follows instruction and assesses achievement. -Used to evaluate the effectiveness of instruction at the end of the academic year or end of the class. -These assessments allow educators to evaluate students' comprehensive competency and final achievement in the subject or discipline. Examples: final exams, statewide or national assessments, EOY Envisions test, etc.

Advantages of High-Stakes Testing

-Tests are based on clearly defined standards and provide important info on students' performance growth and declines. -Tests can highlight gaps in an individual student's knowledge, classroom achievement gaps, or school achievement gaps. -Tests may also motivate students to improve their performance, especially when the results are tied to diplomas and grade promotion.

Predictive Validity

-The extent to which a score on an assessment predicts future performance. -Norm-referenced ability tests, such as the SAT, GRE, etc. are used to predict success in certain domains at a later point in time (e.g., the SAT predicts success in higher education). -To determine the __________________ of an assessment, companies often administer a test to a group of people, and then a few years or months later measure the same group's success or competence in the behavior being predicted. A validity coefficient is then calculated, and higher coefficients indicate greater __________________________.

Construct Validity

-The extent to which an assessment actually measures the underlying construct it is supposed to measure. -Answers the question, "Are we actually measuring what we think we are measuring?"

Content Validity

-The extent to which an assessment represents all facets of tasks within the domain being assessed. -Answers the question, "Does the assessment cover a representative sample of the content that should be assessed?" -Educators should strive for this type of validity, especially for summative assessment purposes. -This is increased when assessments require students to make use of as much of their classroom learning as possible. Example: If you gave your students an end-of-the-year cumulative exam, but the test only covered material presented in the last 3 weeks, the exam would have low ___________________. The entire semester's worth of material would not be represented on the exam.

Reliability

-The extent to which an assessment yields consistent information about the knowledge, skills, or abilities being assessed. -A _____________ assessment is replicable, meaning it will produce consistent scores or observations of student performance. -An assessment is considered ____________ if the same results are yielded each time the test is administered. Example: A student's singing performance should result in similar scores from three different teachers. If one teacher gives the student a score of 10 out of 10, and another teacher gives the student a score of 2 out of 10, the scores are not considered ____________________.

Bell Curve

-The most common type of distribution for a variable. The shape of this distribution is a large, rounded peak tapering away at each end. -When you have a graph that looks like this, you know you have a normal distribution. -It means that a lot of students' scores fall in the middle (indicated by the big bump), while a few students did really well and a few students did poorly.

No Child Left Behind Act of 2001

-The most recent and well-known establishment of standardized high-stakes testing. -Requires states to develop standards and assessments for basic skills (such as reading and math) and assess these skills annually. -Federal school funding is tied to these assessment results.

High-Stakes Testing

-The practice of basing major decisions about individual students, schools, and school personnel on a single assessment. -Places pressure on schools and teachers to produce high test scores each year or face consequences such as reduced funding, salary restrictions, and personnel termination. -Administrators and teachers are held accountable for the students' performance in their classrooms and schools.

Mode

-The score obtained by the most people in the group, or the most common score among a group. -Generally used when scores are not in numerical form. In other words, it's good to use when the data involved are categorical instead of numerical. Example: Car insurance prices are based on a lot of different things, such as gender, age, and also the color of your car. Insurance is more expensive for red cars because red cars get into more accidents than any other color. So red is the _______ car accident color. In this situation, it wouldn't make sense to use the mean or the median because numbers aren't involved.

Median

-The score that falls exactly in the middle, such that half of the people had higher scores, and half of the people had lower scores. -Better to use this over mean when you have extreme scores on one end or the other (i.e., a lot of outliers).
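A small Python illustration of the Mode and Median cards above; the score list and car-color list are hypothetical:

```python
from statistics import mean, median, mode

# One extreme score (outlier) pulls the mean down, but the median stays central.
scores = [12, 70, 72, 75, 78, 80]
print(mean(scores), median(scores))  # 64.5 vs. 73.5

# Mode works even when the data are categories rather than numbers.
car_colors = ["red", "blue", "red", "white", "red", "black"]
print(mode(car_colors))  # "red" -> the most common category
```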

Standard Score

-The score that indicates how far a student's performance is from the mean with respect to the normal distribution of scores (also referred to as standard deviation units). -Useful when describing a student's performance compared to a larger group. -Calculated by subtracting the mean from the raw score and dividing by standard deviation.

Raw Score Disadvantages

-They may be difficult to interpret without knowledge of how one raw score compares to a norm group. -They may also be difficult to understand without comparing them to specific criteria.

Adaptive Testing

-This type of testing is computer-based. -The student's performance on items at the beginning of the test determines which items are presented next.

Internal Consistency Reliability

-Used to assess the consistency of scores across items within a single test. Example: If my science teacher wants to test the __________________ of her test questions on the scientific method, she would include multiple questions on the same concept. High _________ would result in all of the scientific method questions being answered similarly. However, if students' answers to those questions were inconsistent, then _____________ is low.

Test-retest Reliability

-Used to assess the consistency of scores of an assessment from one time to another. -The construct to be measured does not change--only the time at which the assessment is administered changes. -Best used to assess things that are stable over time, such as intelligence. -Typically higher when little time has passed between administrations of assessments. Example: If I were given a science test today and then the same test next week, those two scores could be used to determine ______________________.

Inter-rater Reliability

-Used to assess the degree to which different raters/observers give consistent estimates or scores. -Answers the question, "Do different people score students' performances similarly?" Example: If I performed in front of three teachers individually, _____________________ would indicate each teacher rated/graded me similarly.

Stanines

-Used to represent standardized test results by ranking student performance based on an equal interval scale of 1-9. -These have a mean of 5 and a standard deviation of 2. -A ranking of 5 is average, 6 is slightly above average, and 4 is slightly below average.
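As a hedged sketch, one common shortcut converts a z-score to a stanine with stanine ≈ 2z + 5, rounded and clipped to the 1-9 range; in practice stanines are usually assigned from percentile bands, so this is an approximation only:

```python
# Approximate stanine from a z-score (stanines: mean 5, standard deviation 2).
def stanine_from_z(z):
    return max(1, min(9, round(2 * z + 5)))

print(stanine_from_z(0.0))   # 5 -> average
print(stanine_from_z(0.6))   # 6 -> slightly above average
print(stanine_from_z(-2.3))  # 1 -> well below average
```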

Skewed Distribution

-When a distribution is not normal and is instead weighted heavily on one side or the other. -Less common in classrooms and usually less desirable.

Outlier

-When a score is extremely different from the rest of the scores in a distribution. -If this exists in your data, it will have a huge effect on the mean.

Extended Performance

-When assessing this, teachers want to determine what students are capable of doing over long periods of time. -Allows teachers to assess development at certain points over time. -Also allows time for feedback and the opportunity for students to edit their work. -Can be very time consuming.

How to Increase Reliability of Assessments

1. Give several similar tasks or questions in an assessment to look for consistency of student performance. 2. Define each task clearly so temporary factors, such as test instructions, do not impact performance. 3. If possible, avoid assessing students' learning and performance when they are sick or when there are external factors such as uncontrollable noise. 4. Identify specific, concrete criteria and use a rubric to evaluate student performance.

Disadvantages of Standardized Tests

1. Items are not parallel with typical classroom skills and behaviors. Since the questions have to be generalizable to an entire population, most items assess general knowledge and understanding. 2. Since general knowledge is assessed, teachers cannot use these test results to inform their individual instruction methods. If recommendations are made, teachers may begin to "teach to the test" as opposed to teaching what is in the curriculum or based on the needs of their classroom. 3. The items do not assess higher-level thinking skills. 4. The scores are greatly influenced by non-academic factors, such as fatigue and attention.

Advantages of Standardized Tests

1. Practical, easy to administer, and don't take up a lot of time. 2. Provide quantifiable results. By quantifying students' achievements, educators can identify proficiency levels and more easily identify students in need of remediation or advancement. 3. Scored via computer, which frees up time for the teacher. 4. Since scoring is done by a computer, it is objective and not subject to educator bias or emotions. 5. Allow educators to compare a student's scores with those of students in the same school and across schools. They provide data on individual students' abilities and on the school as a whole, which makes areas of school-wide weakness and strength more easily identifiable. 6. Provide a longitudinal report of student progress. Over time, teachers can see a trend of growth or decline and quickly respond to the student's educational needs.

Considerations when Choosing Appropriate Performance Assessment Tasks

1. Product vs. Process 2. Individual vs. Group Performance 3. Restricted vs. Extended Performance 4. Static vs. Dynamic Assessment

Guidelines for Performance Assessments

1. Tasks should be defined clearly and unambiguously. 2. The teacher must specify the scoring criteria in advance. 3. The teacher must be sure to standardize administration procedures as much as possible. 4. The teacher should encourage students to ask questions when tasks are not clear.

Guidelines for Choosing Standardized Assessments

1. The school should choose an assessment that has high validity for the particular purpose of testing. 2. The school should make sure the group of students used to 'norm' the assessment are similar to the population of the school. 3. The school should take the students' age and developmental level into account before administering any standardized assessments.

What the Normal Distribution Shows

1. The variability or spread of the scores. 2. The midpoint of the normal distribution (the mean of the scores). Example: If we had the following raw scores: 57, 76, 89, 92, 95, the variability would range from 57 (the lowest score) to 95 (the highest score). Plotting these scores along a normal distribution would show us the variability. The midpoint of the distribution is also illustrated.
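A tiny Python check of the card's example scores, showing the spread and the midpoint (the mean):

```python
from statistics import mean

raw_scores = [57, 76, 89, 92, 95]        # the card's example scores
print(min(raw_scores), max(raw_scores))  # 57 95 -> the spread (variability)
print(mean(raw_scores))                  # 81.8 -> the midpoint of the distribution
```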

Norm Group

A reference group that is used to compare one score against similar others' scores.

Informal Assessment

Assessment procedures without rigid guidelines, used for obtaining information through task analysis, inventories, projects, portfolios, teacher-made assessments, etc. Offers important insight into a student's misconceptions and abilities (or inabilities) that might not be represented accurately through other, formal assessments. Example: Walking around the classroom during math centers and assessing which students work well with others.

Achievement Assessment

Designed to assess how much students have learned from classroom instruction. Assessment items typically reflect common curriculum used throughout schools across the state or country (e.g., a history assessment may contain items that focus on national history rather than history distinct to a particular state or county). Provide info about how much a student has learned about a subject, and also provide info on how well students in one classroom compare to other students. They also provide a way to track student progress over time. These assessments do not indicate how much a student has learned in a particular area within the subject (e.g., the assessment may indicate a relative understanding of math, but not whether a student knows how to use a particular equation taught in the classroom).

Specific Aptitude Assessment

Designed to predict future ability to succeed in a particular content domain. Often used to select students for specific instructional programs or remediation programs. May also be used for counseling students about future educational plans and career choices. Usually, one's ability to learn in a specific discipline is stable, and therefore these types of assessments are an effective way to identify academic tendencies and weaknesses. However, these assessments only encourage specific skill development in a few areas, rather than encouraging the development of skills in a wide range of disciplines and abilities. Example: Assessment of one's aptitude for a career in the armed forces.

Formal Assessment

Preplanned, systematic attempt to ascertain what students have learned. Most assessments in school are _____________. Typically, these assessments are used in combination with goals and objectives set forth at the beginning of the school year. These assessments allow students to prepare ahead of time. Two Types: Paper-Pencil Assessments and Performance Assessments. Example: EOY Envisions Test.

Assessment

Procedures used to obtain information about student performance; observing a sample of a student's behavior and drawing inferences. This term carries a more positive connotation than terms such as "test" or "exam," which are often associated with failure and anxiety. These are useful only to the extent that they are aligned with the circumstances in which they are used. Includes measurement, but is broader because it includes all kinds of ways to sample and observe students' skills, knowledge, and abilities.

Raw Score

The score based solely on the number of correctly answered items on an assessment. It will tell you how many questions the student got right, but not much more beyond that.

Assessment as a Motivator

Students study and learn more material when they know they will be tested on it or held accountable.

Assessment and Feedback

Teachers get feedback about students' knowledge, and also the effectiveness of instruction. For students, assessments provide feedback about areas in which they may need to focus, or areas in which they are proficient.

68-95-99.7 Rule

This rule states that for a normal distribution, almost all values lie within one, two, or three standard deviations from the mean. Specifically, approximately 68% of all values lie within one standard deviation of the mean, approximately 95% of all values lie within two standard deviations of the mean, and approximately 99.7% of all values lie within three standard deviations of the mean.
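A short check of the rule using Python's standard-library normal distribution:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal: mean 0, standard deviation 1
for k in (1, 2, 3):
    share = nd.cdf(k) - nd.cdf(-k)
    print(f"within {k} SD of the mean: {share:.1%}")
# Prints roughly 68.3%, 95.4%, and 99.7% -- the 68-95-99.7 rule.
```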

