TE 319 - Test 3


Converting rubric scores to grades

We have to be very careful about how we transform rubric scores into grades. While this is a complex problem, a few brief suggestions can serve as a point of departure.

· Do not equate rubric scores to strict percentages. (Example: on a 1-5 rating scale, 3 would equal 50%; it is easy to see that many students would fail!)
· Two alternate possibilities:
  o Logic rule: A = no more than 10% of scores lower than 4, with at least 40% 5s.
  o Make reasonable decisions based on rubric descriptors (a sketch of this mapping follows below):
    4+ to 5 = A
    3+ to 4 = B
    2+ to 3 = C
    2 = D
    1 = F
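As an illustration of the descriptor-based approach, here is a minimal Python sketch. The source only names whole-number anchors, so the handling of boundary cases (such as scores between 1 and 2) is an assumption:

def rubric_to_grade(score: float) -> str:
    """Map a 1-5 rubric score to a letter grade using the
    descriptor-based cut points above (not strict percentages)."""
    if score > 4:    # 4+ to 5
        return "A"
    if score > 3:    # 3+ to 4
        return "B"
    if score > 2:    # 2+ to 3
        return "C"
    if score >= 2:   # the descriptors pin 2 to a D
        return "D"
    return "F"       # assumption: anything below 2 earns an F

print(rubric_to_grade(4.5))  # A
print(rubric_to_grade(3.2))  # B
print(rubric_to_grade(2.0))  # D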

Which of the following men was U.S. president during the twentieth century?
A. Franklin Roosevelt
B. Jimmy Carter
C. Dwight Eisenhower
D. All of the above

This item has validity issues because it includes "all of the above" as an answer choice.

If a classroom teacher actually computed a Kuder-Richardson coefficient for a final exam, this would be an example of an:
A. stability reliability coefficient
B. internal consistency reliability coefficient
C. content validity coefficient
D. construct validity coefficient

"An" shows us that it has to be answer =B. This is the only answer that starts with a vowel.

Item-Writing Guidelines for Matching Items

1. Employ homogeneous lists. (Break up word boxes into separate lists: date options, presidents' name options, etc. Don't just mix them all together.)
2. Use relatively brief lists, placing the shorter words or phrases on the right.
3. Employ more responses than premises.
4. Order the responses logically. (Example: alphabetical order.)
5. Describe the basis for matching and the number of times responses may be used.
6. Place all premises and responses for an item on a single page/screen. (When taking a matching test, students shouldn't have the word box on one page and the blanks on another.)

Item-Writing Guidelines for Binary-Choice Items

1. Phrase items so that a superficial analysis by the student suggests a wrong answer.
2. Rarely use negative statements, and never use double negatives.
3. Include only one concept in each statement.
4. Have an approximately equal number of items representing the two categories being tested. (About half of the answers should be true and half false.)
5. Keep item length similar for both categories being tested. (Example: you don't want all of the one-line statements to be true and all of the two-line statements to be false.)

Item-Writing Guidelines for Multiple Binary-Choice Items

1. Separate item clusters vividly from one another.
2. Make certain that each item meshes well with the cluster's stimulus material. (In the example below, all of the items in the cluster relate to the same stimulus data.)

Five General Item-Writing Commandments

1. Thou shalt not provide opaque directions to students regarding how to respond to your assessment instruments. (Make directions clear.)
2. Thou shalt not employ ambiguous statements in your assessment items.
3. Thou shalt not provide students with unintentional clues regarding appropriate responses.
4. Thou shalt not employ complex syntax in your assessment items.
5. Thou shalt not use vocabulary that is more advanced than required. (A math test does not need tough wording, because we are only scoring math, not vocabulary.)

Example of a multiple binary-choice item cluster. Suppose that a dozen of your students completed a 10-item multiple-choice test and earned the following scores (number correct): 5, 6, 7, 7, 7, 7, 8, 8, 8, 8, 8, 10

9. The median for your students' scores is 7.5. (True)
10. The mode for the set of scores is 8.9. (False)
11. The range of the students' scores is 5.0. (True)
12. The median is different than the mean. (False)
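These values are easy to check. Here is a quick Python sketch using the standard statistics module to compute each statistic for the score set above:

import statistics

scores = [5, 6, 7, 7, 7, 7, 8, 8, 8, 8, 8, 10]

print(statistics.median(scores))   # 7.5 (average of the 6th and 7th sorted scores)
print(statistics.mode(scores))     # 8 (appears five times)
print(max(scores) - min(scores))   # 5 (range: 10 - 5)
print(statistics.mean(scores))     # 7.4166... (89 / 12)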

Analytic Rubric

Breaks the performance down piece by piece, and the student gets scored on each individual part. It is generally better to grade analytically.

True? False? One of the most heinous transgressions in the genesis of assessment items is the item's incorporation of obfuscative verbiage.

Difficult words

True? Or False? A classroom test that measures appropriate knowledge and/or skills is likely to be reliable because of the strong link between validity and reliability.

Formatting: the "True? Or False?" presentation is awkward. Content: the statement contains "and/or," which cannot appear in a true/false question.

Directions: On the line to the left of each state in Column A, write the letter of the city from Column B that is the state's capital. Each city in Column B can be used only once.

Column A:
1. Oregon
2. Florida
3. California
4. Washington
5. Kansas

Column B:
a. Bismarck
b. Tallahassee
c. Los Angeles
d. Salem
e. Topeka
f. Sacramento
g. Olympia
h. Seattle

It is good that Column B contains more responses than Column A. It is bad that, for most states, only one Column B city is located in that state: even a student who does not know the state capitals can work out the answers by process of elimination, simply by knowing which state each city is in.

Rubrics

Powerful tools for assessing and communicating expectations

Rubrics: Session Description

Rubrics provide a means to assess student achievement more objectively. However, the real power of these tools may lie in communication. Well-constructed rubrics communicate to students and parents clear target criteria, levels of proficiency, and a more objective means of scoring student achievement. You may even notice fewer arguments about grades!

Correct? Or Incorrect? When teachers assess their students, it is imperative they understand the content standard on which the test is based.

The pronoun "they" is ambiguous: we don't know whether it refers to the teachers or the students, and the correct answer changes depending on which one the question means.

True? or False? Test items should never be constructed that fail to display a decisive absence of elements which would have a negative impact on students because of their gender or ethnicity.

This is a triple negative ("never ... fail ... absence").

True? Or False? Having undertaken a variety of proactive steps to forestall the inclusion of items that might, due to an item's content, have an unwarrantedly adverse impact on students because of any student's personal characteristics, the test-developers then, based on sound methodological guidelines and should carry out a series of empirically based bias-detection studies.

This is poorly written: it does not actually make a clear statement, which also makes it a bad candidate for a true/false item.

Which of the following isn't an example of how one might collect construct-related evidence of validity?
A. intervention studies
B. test-retest studies
C. differential-population studies
D. related-measures studies

This sets the student up to miss the question because of the wording. The stem says "isn't"; it should say "not," underlined and/or bolded. It is a poorly written question because of its presentation.

A set of properly constructed binary-choice items will:
A. Typically contain a substantially greater proportion of items representing one of the two alternatives available to students.
B. Incorporate qualities that permit students to immediately recognize the category into which each item falls, even based on only superficial analyses.
C. Vary the length of items representing each of the two binary-option categories so that, without exception, shorter items represent one category while longer items represent the other.
D. Contain no items in which more than a single concept is incorporated in each of the items.

The options are too long to read; the question (stem) should be longer than the answer choices.

Utilizing rubrics and what these tools will NOT solve: when arguments arise...

o Hopefully, these tools will help lessen the occurrence of arguments, but they will not eliminate them.
o Remember, you are a trained expert, and your professional opinion is the result of education, experience, and ongoing development. Don't be afraid to say so!

Holistic Rubric

Scores the work as a whole; a general overall impression. (Example: our CDIS paper will be graded holistically.)

Areas to consider when developing rubrics

· Content and coverage—What will you look for? What will count? Does it include what is really important? (VALIDITY)
· Clarity—the extent to which other teachers, students, and others interpret the score the same way. (RELIABILITY)
· Practicality—easy to use, easy to understand, not too cumbersome; a logical tool to measure the identified target.
· Technical quality—difficult but necessary questions to ask:
  o Do the performance criteria adequately measure the goal being assessed?
  o Do the ratings actually represent what students can/cannot do?
  o Is specific feedback provided that is understandable to students?

Two main types of rubrics

· Holistic—a single score or rating for an entire product or performance.
· Analytic Trait—divides a product or performance into essential traits, dimensions, and/or subskills. (Better for complex, multidimensional targets; provides more specific feedback.)

Developing rubrics

· List the most important criteria, performance elements, and/or sub-skills.
· Determine meaningful achievement indicators and/or scoring ratings (see the sketch after this list):
  o What is the range of qualitative degrees of achievement?
  o You need enough qualitative degrees to meaningfully distinguish qualities.
  o If you wish to determine competence, for example: ratings 1-4, where 3 = meets the standard and 4 = exceeds the standard.
  o An even number of ratings forces a response rather than the tendency to gravitate toward the mean. Recommend 3-6 ratings, depending upon the task being assessed.
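To make this concrete, here is a minimal Python sketch of an analytic-trait rubric on the 1-4 scale described above (3 = meets standard, 4 = exceeds). The traits and descriptors are hypothetical examples, not taken from these notes:

# The traits ("organization", "evidence") and their descriptors are
# invented for illustration; only the 1-4 scale comes from the notes.
rubric = {
    "organization": {
        1: "No discernible structure",
        2: "Partial structure; ideas hard to follow",
        3: "Clear structure that meets the standard",
        4: "Purposeful structure that exceeds the standard",
    },
    "evidence": {
        1: "No supporting evidence",
        2: "Evidence present but weak or off-target",
        3: "Relevant evidence that meets the standard",
        4: "Compelling, well-integrated evidence",
    },
}

def score_product(ratings):
    """Average the per-trait ratings into a single analytic score."""
    return sum(ratings.values()) / len(ratings)

print(score_product({"organization": 3, "evidence": 4}))  # 3.5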

Utilizing rubrics and what these tools will NOT solve

· Your assessment/grading system must be clearly articulated to students and parents prior to the grading period, and it should not change in the middle of the grading period.
· You must share your assessment/grading plan with your administrators prior to the grading period, and it likewise should not change mid-period.
  o Seek their support for your system and hash out any differences of opinion.
  o It is critical that you work through any difficulties to avoid embarrassing "backtracking" midstream.
  o Whining and complaining about lack of support when you have not done your job is a waste of energy and oxygen.
· Rubrics will not provide a backbone for teachers who have difficulty communicating sometimes-critical feedback to students regarding their classroom performance. Fight the urge to regress toward the mean by picking the middle number(s).
· Rubrics are only tools; you are still the data collector/assessor/evaluator. Remember: whenever a human being is involved, there are risks of error, subjectivity, and bias.

Purposes for using scoring rubrics

• Clarify targets of instruction, especially those that are complex
• Create valid and reliable tools to more objectively assess student achievement
• Improve motivation and achievement by communicating expectations, targets, and levels of achievement to students and parents

Two main components of rubrics

• Clearly identified targets of instruction or task criteria
• Clearly identified and logical performance/scoring levels

Benefits for Teachers

• Consistency of scoring—predetermined criteria and achievement levels help to ensure...
  o you are focused on the most important components of a task
  o reduced subjectivity
  o increased consistency between scorers/classrooms/multiple evaluators
• Improved instruction—clarifies goals and provides focus for instruction
• Creates a more objective context for relating assessments and grading procedures

Benefits for Students

• Shared vocabulary
• Clearer understanding of important criteria
• Clearer understanding of component sub-skills/concepts of complex tasks
• Defined levels of achievement
• Less subjective feedback
• Fewer surprises at grading time

