Personality exam #1 - Chapter 3
A measure that assesses several dimensions of personality is called an inventory.
True
According to the empirical approach to test development, it does not matter at all what the items look like.
True
Error:
Random influences that are incorporated in measurements
When are different methods used? Rational vs. empirical
Rational approach: usually used in connection with theory building
Empirical approach: usually used in connection with practical needs
Operational definition
Defining a concept using the concrete events through which it is measured
If a researcher demonstrates that her measure of loneliness is not correlated with a measure of intelligence, she has provided evidence for the _________ validity of this measure.
Discriminant
Sometimes researchers actually try to reduce _________ validity by obscuring the true purpose of a measure.
Face
The theoretical approach to assessment often results in measures that have a high degree of _________ validity.
Face
A self-report scale can measure only one aspect of personality.
False
Internal reliability is the extent to which raters agree with one another.
False
Construct validity
The most important kind of validity
- The measure (the assessment device) reflects the construct (the conceptual quality) that the psychologist has in mind
- Any trait quality is a construct
When a measure comes from someone other than the person being observed, this is called:
Observer rating
Barnum/Forer effect:
People believe what you tell them about themselves when:
- The info is vague and complimentary
- They trust you
- They believe the analysis applies specifically to them
The attempt to create a good impression on a personality measure is called:
Social desirability
Why is face validity regarded as a convenience by researchers?
Some believe it is easier to respond to face-valid instruments
Reliability and validity
A measure can be reliable WITHOUT being valid, BUT it cannot be valid UNLESS it is reliable (e.g., a bathroom scale that always reads five pounds heavy is perfectly consistent, yet inaccurate).
Which of the following is NOT suggested as a way to deal with social desirability?
Tell participants they will not receive credit if they are found to be deceptive.
A high correlation between scores on the same test administered at two different points in time demonstrates high:
Test-retest reliability
Split-half reliability refers to the:
correlation between the items comprising the first and second halves of a test.
The scale construction method that uses predefined groups to select items is called the:
criterion keying approach.
The criterion keying approach emphasizes the importance of a scale's ability to accurately predict whether a person is a member of some group.
True
When participant responses to a test are consistent across time, the test is said to have strong test-retest reliability.
True
Response sets are:
biased ways in which people respond to personality measures
A high correlation between an assessment device and an external standard of comparison is an indication of _________ validity.
Criterion
_________ validity is generally seen as the most important means of establishing construct validity.
Criterion
Empirical (data-based) approach:
- Data-driven: start from a large pool of candidate items
- Use statistical methods to select items based on their ability to differentiate group membership (see the sketch below)
- Also known as criterion keying
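A minimal sketch of how empirical (criterion-keyed) item selection works, using invented 0/1 endorsement data and NumPy (both assumptions, not from the cards): items are kept only when they statistically separate a criterion group from controls, regardless of what they say.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
in_criterion_group = np.repeat([True, False], n // 2)  # e.g., diagnosed vs. control

# Hypothetical 0/1 endorsements for 10 candidate items; items 0-2 genuinely
# differentiate the groups, the rest are pure noise.
p_endorse = np.full((n, 10), 0.5)
p_endorse[in_criterion_group, :3] = 0.8
responses = (rng.random((n, 10)) < p_endorse).astype(int)

# Criterion keying: keep items whose endorsement rates separate the groups,
# no matter what the items "look like" (content is irrelevant).
diff = responses[in_criterion_group].mean(axis=0) - responses[~in_criterion_group].mean(axis=0)
keep = np.where(np.abs(diff) > 0.15)[0]  # arbitrary selection threshold
print("Items retained by criterion keying:", keep)
```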
Rational (theoretical) approach:
- Start with theory and conceptualization
- Create items to fit the conceptualization
- Test validity and reliability
Response set:
A biased orientation to answering
Subjective measure:
A measure incorporating personal interpretation
Objective measure:
A measure that incorporates no interpretation.
Inventory:
A personality test measuring several aspects of personality on distinct subscales.
Observer ratings:
An assessment in which someone else produces information about the person being assessed.
Split-half reliability:
Assessing internal consistency among responses to items of a measure by splitting the items into halves, then correlating them.
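A minimal sketch of computing split-half reliability, assuming made-up item scores: correlate the odd-item and even-item half totals, then apply the Spearman-Brown correction to estimate reliability at full test length.

```python
import numpy as np

# Hypothetical data: 100 respondents x 8 items; each response = true score + random error.
rng = np.random.default_rng(1)
items = rng.normal(size=(100, 1)) + rng.normal(scale=0.8, size=(100, 8))

half_a = items[:, 0::2].sum(axis=1)  # odd-numbered items
half_b = items[:, 1::2].sum(axis=1)  # even-numbered items
r_halves = np.corrcoef(half_a, half_b)[0, 1]

# Spearman-Brown correction: projects the half-test correlation to full length.
split_half = 2 * r_halves / (1 + r_halves)
print(f"half-to-half r = {r_halves:.2f}, corrected split-half = {split_half:.2f}")
```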
The process of measuring personality is called:
Assessment
Reliability:
Consistency across repeated measurements
Inter-rater reliability
Consistency between raters or observers
- Also called inter-observer agreement
- Measuring device: the rater
Test-retest reliability
Consistency of the same test across time
- Measuring device: the entire test
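Both inter-rater and test-retest reliability boil down to a correlation between two sets of measurements of the same people; a minimal sketch with invented scores:

```python
import numpy as np

rng = np.random.default_rng(2)
trait = rng.normal(size=50)  # hypothetical true trait levels for 50 people

# Test-retest: the same test on two occasions; only the random error changes.
time1 = trait + rng.normal(scale=0.5, size=50)
time2 = trait + rng.normal(scale=0.5, size=50)
print("test-retest r:", round(np.corrcoef(time1, time2)[0, 1], 2))

# Inter-rater: two observers rating the same people.
rater1 = trait + rng.normal(scale=0.5, size=50)
rater2 = trait + rng.normal(scale=0.5, size=50)
print("inter-rater r:", round(np.corrcoef(rater1, rater2)[0, 1], 2))
```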
Reliability
Consistency or repeatability of a measure; every measurement contains a true score plus error
- Internal consistency
- Inter-rater reliability
- Test-retest reliability
Internal Consistency
Consistency within a test: are the individual items or observations consistent with one another?
- Example: split-half reliability
- Measuring device: the test item
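The standard index of internal consistency (not named in these cards) is Cronbach's alpha; a minimal sketch with hypothetical data:

```python
import numpy as np

def cronbach_alpha(items):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    items = np.asarray(items)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical data: 100 respondents x 8 items tapping one construct.
rng = np.random.default_rng(3)
items = rng.normal(size=(100, 1)) + rng.normal(scale=0.8, size=(100, 8))
print(f"alpha = {cronbach_alpha(items):.2f}")
```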
If an assessment measures the intended conceptual characteristics, it has demonstrated:
Construct validity
The most all-encompassing and, thus, most important kind of validity is:
Construct validity
If a scale is correlated with other scales that measure similar concepts it is said to have:
Convergent validity
Reliability within a set of observations measuring the same aspect of personality is referred to as:
Internal consistency
A measure that assesses several dimensions of personality is called a(n):
Inventory
Which of the following is true about reliability and validity?
It is possible for a measure to be reliable but not valid
Which of the following is NOT true about the empirical approach to developing measures?
It relies on theory
The reliability of an observation refers to:
Its consistency across repeated observations
Objective
A measure of concrete reality that involves no interpretation
- Example: a count of the number of times a person touches another in an interpersonal interaction
Construct validity:
The accuracy with which a measure reflects the underlying concept
Face validity
The assessment device appears "on its face" to be measuring the construct it was intended to measure
- That is, the assessment appears effective in terms of its stated aims
Operational definition:
The defining of a concept by the concrete events through which it is measured (or manipulated).
Inter-rater reliability:
The degree of agreement between observers of the same events
Validity:
The degree to which a measure actually measures what it is intended to measure
Convergent validity:
The degree to which a measure relates to other characteristics that are conceptually similar to what it's supposed to assess.
Discriminant validity:
The degree to which a scale does not measure unintended qualities
Criterion Validity:
The degree to which the measure correlates with a separate criterion reflecting the same concept.
Predictive validity:
The degree to which the measure predicts other variables it should predict.
Criterion keying:
The developing of a test by seeing which items distinguish between groups
Social desirability:
The response set of tending to portray oneself favorably.
Acquiescence:
The response set of tending to say "yes" (agree) in response to any question
Face validity:
The scale "looks" as if it measures what its supposed to measure
Test-retest reliability:
The stability of measurements across time
Rational approach (to scale development):
The use of a theory to decide what you want to measure, then deciding how to measure it.
Empirical approach:
The use of data instead of theory to decide what should go into the measure
Inter-rater reliability is most applicable to observational measures.
True
Making multiple observations generally improves reliability.
True
Observer ratings can involve interviews.
True
Researchers have been able to learn about personality by studying people's bedrooms.
True
Which of the following is NOT required by the rational approach to developing personality measures?
demonstrating the measure has never been administered before
A measure is high in validity when:
the operational definition closely matches the conceptual definition.
A response set in which participants simply tend to answer "yes" to all questions is known as:
Acquiescence
Which of the following is a potential source of error in a measure?
- the way an item is phrased
- variations in an observer's attention
- distractions present when observations are made
Observer ratings can be based on
- Interviews in which people talk about themselves
- Direct observations of overt action
- Interviews in which people talk about something other than themselves
Validity
Accuracy: is the measure assessing what it is intended to assess?
- Is our measure of extraversion actually capturing extraversion (and NOT something else)?
- Is our operational definition of extraversion accurate?
Internal reliability
Agreement among responses made to the items of a measure
Acquiescence is a response set that cannot be resolved.
False
Discriminant validity can be established more quickly than other types of validity.
False
If a trained observer rates how tired a person appears after a test, that observer is making an objective rating.
False
If observer ratings involve interviews, participants must talk about themselves to the interviewer.
False
If we know that a scale is reliable, we also know that it is valid.
False
Implicit assessment involves asking participants directly about themselves.
False
Item response theory, because it is such a new technique, has only been applied to a narrow range of assessments thus far.
False
Most researchers think that face validity is among the most important types of validity.
False
Once a personality measure has been validated, it need never be revised and/or re-validated.
False
Split-half reliability refers to how consistent responses are across time.
False
Subjective
A measure that involves interpretation
- Example: evaluating facial expressions for signs of hostility
Implicit assessment:
Measuring associations between the sense of self and aspects of personality that are implicit (hard to introspect about)
Implicit assessment (e.g., the Implicit Association Test, IAT)
Measuring patterns of association within the self that are not open to introspection
Response sets
A readiness to answer in a particular way; these are BIASES
- Acquiescence: the tendency to say "YES" (a countermeasure is sketched below)
- Social desirability: the tendency to respond in a manner that will be viewed favorably by others
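One standard way to blunt acquiescence (a sketch, with hypothetical 1-5 Likert data): write half the items reverse-worded and flip their scores before summing, so saying "yes" to everything no longer inflates the total.

```python
import numpy as np

# Hypothetical 1-5 Likert responses: items 0-3 regular, items 4-7 reverse-worded.
responses = np.array([[5, 4, 5, 5, 1, 2, 1, 1],   # genuinely high scorer
                      [5, 5, 5, 5, 5, 5, 5, 5]])  # acquiescent "yes to everything"

reverse_keyed = [4, 5, 6, 7]
scored = responses.astype(float)
scored[:, reverse_keyed] = 6 - scored[:, reverse_keyed]  # flip the 1-5 scale

# The genuine scorer stays high (38/40); the acquiescent one lands at the
# scale midpoint (24/40) instead of the maximum.
print(scored.sum(axis=1))
```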
If there is a high degree of correlation among the items on a measure, it is said to have high:
Reliability
Discriminant validity
Tests whether concepts or measurements that are NOT supposed to be related are, in fact, UNRELATED
Convergent validity
The evidence "converges" on the construct you're interested in, even though any single finding itself won't clearly reflect the construct - Refers to the degree to which two measures of constructs that theoretically should be related, are in fact related
Criterion validity
The extent to which a measure is related to an outcome
- A comparison between the measure in question and an outcome assessed at the same time
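Criterion (and predictive) validity is checked the same way, except the second variable is an external standard rather than another scale; a quick sketch with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(5)
conscientiousness_scale = rng.normal(size=150)  # hypothetical scale scores

# External criterion: assessed at the same time (criterion validity) or later,
# e.g., job performance a year on (predictive validity).
supervisor_rating = 0.5 * conscientiousness_scale + rng.normal(scale=0.9, size=150)
print("criterion r:", round(np.corrcoef(conscientiousness_scale, supervisor_rating)[0, 1], 2))
```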
Assessment:
The measuring of personality
An operational definition is a description of some kind of physical event.
True