Categorization


Benefits of Categorization

1.) It reduces the complexity of our environment: for color, there are roughly 7 million discriminable shades, but we group them into categories so we don't have to think about every one. 2.) It is the way we identify objects in our world. 3.) It reduces our need for constant learning. 4.) It helps us anticipate what the appropriate behaviors or actions will be. 5.) It allows us to understand relationships among categories, like mammals and all of their subcategories.

Problems with Categorization

1.) We exaggerate within-group similarities, especially with social categories (e.g., "all women are alike"). 2.) We tend to exaggerate between-group differences.

Feature comparison model: 2-step verification process

1.) We assess the total overlap of characteristic and defining features. A.) Lots of overlap of characteristic and defining features: respond "true" to the sentence. B.) Intermediate overlap: go to step 2. C.) Very little overlap: respond "false" to the sentence. 2.) We compare only the defining features; if they match, respond "true," and if they don't match, respond "false."
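A minimal sketch of this two-step decision in Python; the feature sets and the high/low overlap thresholds are invented for illustration, not values from the model itself:

```python
# Hypothetical sketch of the two-step decision; feature sets and the
# high/low overlap cutoffs are illustrative assumptions, not model values.

def verify(item_chars, item_defs, cat_chars, cat_defs, high=0.8, low=0.3):
    """Decide 'An ITEM is a CATEGORY' with the two-step process."""
    item_all, cat_all = item_chars | item_defs, cat_chars | cat_defs
    # Step 1: overall overlap across characteristic and defining features.
    overlap = len(item_all & cat_all) / len(item_all | cat_all)
    if overlap >= high:
        return True        # lots of overlap: fast "true"
    if overlap <= low:
        return False       # very little overlap: fast "false"
    # Step 2: intermediate overlap, so compare defining features only.
    return cat_defs <= item_defs

bird_defs, bird_chars = {"feathers", "lays eggs"}, {"flies", "small", "sings"}
robin_defs, robin_chars = {"feathers", "lays eggs"}, {"flies", "small", "sings"}
print(verify(robin_chars, robin_defs, bird_chars, bird_defs))   # True (step 1)
```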

McNamara: spreading activation

Bread-butter-popcorn. The word "bread" should activate "butter" fairly quickly, but would it also activate "popcorn"? Four target conditions: non-word (nart), unrelated (nurse), closely related (butter), and indirectly related (popcorn). The closely related condition had the fastest reaction time, and the indirectly related condition was second fastest.

Rosch et al.: sentence-picture verification task

Subjects saw a picture of a chair paired with a sentence. Superordinate = "This is furniture," basic = "This is a chair," and subordinate = "This is a dining room chair." They were asked to respond true or false. Subjects were fastest when the picture was paired with a basic-level sentence.

Tanaka and Taylor, sentence-picture verification task

Subjects were bird experts and dog experts. Bird experts looked at pictures of birds and dogs; for birds, both their basic and subordinate levels were fastest, while for dogs only the basic level was fastest. Vice versa for the dog experts. For general knowledge we categorize at the basic level, but with more knowledge (expertise) we use subordinate categorization.

Frequency theory

The more frequently category members have relevant attributes, the easier it is to determine the relevant attributes.

Schema

a cluster of knowledge that represents a general procedure, object, event, sequence of events, or social situation. Seen in problem solving, reading, expertise, social psychology, and stereotypes. Key idea: it is structured, there are slots for expected information, and there may be default values.

Reversal of the category size effect

a larger, higher-level category normally leads to a longer reaction time, but sometimes you find the opposite. Ex: "a monkey is a mammal" vs. "a monkey is an animal." Mammal is the smaller category, so it should be verified faster, but "animal" is actually verified faster.

Leveling/ flattening

loss of details; in the story, things like names or details that don't fit with someone's schema are forgotten.

Concept Identification task

a subject is shown a set of cards with different objects on them. The cards are presented to the subject one at a time in a random order and the subject is to determine whether the card belongs to the class to be learned. After each presentation, the subject is given feedback on the correctness of his response. Typically, the subject is told which attributes of the objects on the card (e.g., the number of objects, the shape of the objects, and the color of the objects) are potentially relevant. The trials continue until the subject makes no errors.
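A toy simulation of the trial procedure; the card attributes, the target rule (red and square), and the simple feedback-memorizing "subject" are all assumptions made up for illustration:

```python
# Toy sketch of the trial loop, not of any particular subject strategy.
import itertools
import random

cards = [dict(color=c, shape=s, number=n)
         for c, s, n in itertools.product(
             ["red", "blue"], ["square", "circle"], [1, 2])]

def in_category(card):                      # the rule the subject must discover
    return card["color"] == "red" and card["shape"] == "square"

def subject_guess(card, remembered):        # placeholder strategy: reuse
    return remembered.get((card["color"], card["shape"]), False)   # past feedback

remembered, errors = {}, 1
while errors:                               # cycle until an error-free pass
    errors = 0
    random.shuffle(cards)                   # random presentation order
    for card in cards:
        guess = subject_guess(card, remembered)
        correct = in_category(card)
        remembered[(card["color"], card["shape"])] = correct    # feedback
        errors += (guess != correct)
print("learned: completed a pass with no errors")
```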

Prototype theory

an average of all the members of a category: we have an idea of what the average dog is, and each time we encounter a new one we update our average. Each category has a stored prototype in memory; a new item is compared to the prototypes of different categories and assigned to the category with the most similar prototype. Requires more work during category development.
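A rough sketch of the idea, under the assumption that category members can be coded as numeric feature vectors; prototypes are running averages and a new item goes to the nearest one:

```python
# Illustrative only: items are assumed to be numeric feature vectors
# (e.g., body length, leg length, ear length); prototypes are running means.
import math

prototypes = {}          # category -> (mean vector, count)

def learn(category, item):
    """Update the category's prototype with a newly encountered member."""
    mean, n = prototypes.get(category, ([0.0] * len(item), 0))
    new_mean = [(m * n + x) / (n + 1) for m, x in zip(mean, item)]
    prototypes[category] = (new_mean, n + 1)

def categorize(item):
    """Assign a new item to the category with the most similar prototype."""
    def dist(cat):
        mean, _ = prototypes[cat]
        return math.dist(mean, item)
    return min(prototypes, key=dist)

for dog in [(60, 30, 10), (40, 25, 8), (70, 35, 12)]:
    learn("dog", dog)
for cat_ in [(25, 15, 6), (30, 18, 7)]:
    learn("cat", cat_)
print(categorize((55, 28, 9)))   # closest to the averaged dog -> "dog"
```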

Natural categories (hierarchical organization): Superordinate categories

are at the top of the hierarchy and very broad. People have a hard time listing attributes for this type of category since there is not a lot of similarity within the category; superordinate categories highlight only the most functionally salient features of basic-level categories. For instance, one of the most essential defining features of a CAR is that it is used for the transportation of people and objects, which is inherited from the category-wide feature of VEHICLE. Ex: vehicle and animal.

Feature comparison model

for categories we store a list of features. Two types: characteristic features, which many but not all category members have (most birds can fly, but not all), and defining features, which all category members have.

Natural categories

have continuous dimensions, like color, which varies along a spectrum; boundaries are somewhat fuzzy (we are unsure where some items fit, e.g., is a tomato a fruit or a vegetable?). Some members are better representatives than others: typical birds are eagles and blue jays, while atypical ones are penguins and ostriches.

spreading activation theory

is built around nodes; the connecting lines differ in length because length indicates the strength of the relationship (shorter lines = stronger connections). Activation spreads to related concepts: "red" will activate roses, apples, cherries, and sunsets. Concepts that are closer receive more activation, and receive it faster. Hearing "red" might prime you to think about related concepts; the idea is modeled on neurons. More typical members have closer connections to the concept you are thinking about.
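A small sketch of how activation might spread through such a network; the nodes, links, strengths, and decay value are invented examples:

```python
# Toy spreading-activation sketch; links and strengths are made-up examples.
# Stronger links (shorter lines in the diagram) pass along more activation.
links = {
    "red":    [("roses", 0.9), ("apples", 0.8), ("cherries", 0.8),
               ("sunsets", 0.5), ("fire engines", 0.6)],
    "roses":  [("flowers", 0.9), ("red", 0.9)],
    "apples": [("pears", 0.7), ("red", 0.8)],
}

def spread(source, decay=0.5, depth=2):
    """Return activation levels after spreading out from a primed concept."""
    activation = {source: 1.0}
    frontier = [source]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for neighbor, strength in links.get(node, []):
                gain = activation[node] * strength * decay
                if gain > activation.get(neighbor, 0.0):
                    activation[neighbor] = gain
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return activation

print(spread("red"))   # closely linked concepts end up more activated
```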

Bartlett Schema

he used a technique called the method of serial reproduction: subjects read a story, recalled it from memory, and passed their version on to the next person. The story didn't match the subjects' schemas, although they related it to their war schema. Whatever information fit into their schema was remembered better.

Typicality and family resemblance

the idea was taken from looking at families: you share some features with family members without being identical to them. Higher family resemblance = more features shared with other category members.

Rationalization

ideas become more compact and consistent with the reader's expectations; whatever doesn't fit the schema disappears. We process information actively, trying to make sense of it by fitting it into schemas and slots. Recall is a re-constructive process: we use schemas, and if something doesn't make sense we fill in default values (assumptions).

Basic level

intermediate, in the middle of the hierarchy. Ex: car and dog. At this level there tends to be high within-category similarity and low between-category similarity, and many attributes can be listed. Between-category similarity refers to the similarities between, say, dogs and cats (not many). We have a natural tendency to categorize at this level.

Spreading activation problems

it is not computationally demanding, but its predictions are data driven. For example, how do we know that nurse and butter are not related? Just because people don't mention the connection doesn't mean it isn't there. The model also has many assumptions, so it is not very simple.

Schema theory

describes more complex types of knowledge structures and the way complex relations might be organized in long-term memory (LTM).

Concept ID categories

not realistic (artificially sharp boundaries), tend to be discrete categories (square vs. circle with no other possibilities), and all members are treated as equal.

Rosch and Mervis : family resemblance

our intuitions about typicality have to do with family resemblance. Subjects were given 20 category members and had to rank them in order of typicality. The second condition involved the same 20 members, but subjects were asked to list attributes for each. More typical members have higher family resemblance scores, meaning the two measures are correlated.
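A sketch of how a family resemblance score can be computed from listed attributes; the members and attribute lists below are invented, and the scoring is only roughly in the spirit of Rosch and Mervis's measure:

```python
# Illustrative family-resemblance scoring; the attribute lists are made up.
# Each member's score counts how many other category members share each of
# its attributes, summed over all of its attributes.
attributes = {
    "robin":   {"flies", "sings", "lays eggs", "has feathers", "small"},
    "sparrow": {"flies", "sings", "lays eggs", "has feathers", "small"},
    "eagle":   {"flies", "lays eggs", "has feathers", "hunts"},
    "penguin": {"swims", "lays eggs", "has feathers"},
}

def family_resemblance(member):
    others = [a for name, a in attributes.items() if name != member]
    return sum(sum(attr in o for o in others) for attr in attributes[member])

for name in attributes:
    print(name, family_resemblance(name))
# Typical members (robin, sparrow) score higher than the atypical penguin.
```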

Typicality effect

people verify typical category members faster than atypical ones. The model doesn't explain why this happens; in fact, it predicts that the times should be equal.

Posner and Keele: categorizing novel patterns

people were first taught categories, starting from four prototype dot patterns. Distortions were created by taking the prototype and moving each dot in a random direction from its original position. Two training conditions: 1.) group 1 was trained on low-variability distortions; 2.) group 2 was trained on moderate-variability distortions. The evidence supported that subjects mentally formed prototypes, but group 2 did better on high-variability patterns. Both groups formed prototypes, and we also store information about variability; this supports prototype theory.
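A sketch of how such distortions can be generated from a prototype dot pattern; the prototype coordinates and the jitter magnitudes are arbitrary illustrative values:

```python
# Toy generator for distorted dot patterns in the style of Posner and Keele;
# the prototype and the jitter magnitudes are arbitrary illustrative values.
import random

prototype = [(10, 20), (35, 50), (60, 15), (80, 70), (45, 90),
             (25, 65), (70, 40), (55, 30), (15, 80)]   # one 9-dot pattern

def distort(pattern, magnitude):
    """Move each dot a random amount (up to `magnitude`) in x and y."""
    return [(x + random.uniform(-magnitude, magnitude),
             y + random.uniform(-magnitude, magnitude)) for x, y in pattern]

low_variability  = [distort(prototype, 2) for _ in range(20)]   # group 1 training
high_variability = [distort(prototype, 8) for _ in range(20)]   # group 2 training
# Group 2, trained on more variable distortions, later handles
# high-variability test patterns better.
```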

cognitive economy principle

refers to the fact that properties of concepts are stored at the highest possible level in the hierarchy and not re-represented at lower levels. Things are not stored redundantly; a property is stored at the highest level at which it still makes sense. This applies to property relations. People are faster at verifying superset relations than property relations.

Attribute learning

refers to the process of discovering relevant attributes based on a known logical rule. Conjunctive and disjunctive rules are easier for people to figure out than conditional and bi-conditional rules, but once we get practice we become good at following the rule.

Subordinate level

smaller and more specific, like a Volkswagen Bug or a German Shepherd. Many attributes can be listed, and there is high within-category and high between-category similarity (like a McIntosh and a Pink Lady apple), so categories can become confusing.

Sharpening

some details are kept or exaggerated because they fit with the schema

hierarchical network model

there are two types of relations: 1.) property relations, like "a canary can sing" and "a canary is yellow," which are characteristics of the category; and 2.) superset relations, like "a shark is a type of fish," which indicate the supersets the category belongs to. The further apart two concepts are in the network, the longer it takes us to verify a sentence relating them.
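A toy version of such a network, combining superset links with properties stored at the highest sensible level (cognitive economy); the nodes and properties are illustrative, and the step count stands in for predicted reaction time:

```python
# Toy hierarchical network; nodes and properties are illustrative only.
# Properties live at the highest level where they apply (cognitive economy).
supersets  = {"canary": "bird", "shark": "fish",
              "bird": "animal", "fish": "animal"}
properties = {"canary": {"is yellow", "can sing"},
              "bird":   {"has feathers", "can fly"},
              "animal": {"breathes", "has skin"}}

def verify_property(concept, prop):
    """Return the number of levels traversed, or None if the property isn't found."""
    steps = 0
    while concept is not None:
        if prop in properties.get(concept, set()):
            return steps             # more levels -> longer predicted RT
        concept = supersets.get(concept)
        steps += 1
    return None

print(verify_property("canary", "can sing"))   # 0 levels: fast
print(verify_property("canary", "has skin"))   # 2 levels: slower
```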

Ericsson and Polson: semantic organization

they studied J.C., an expert waiter, and compared him with control undergraduates. For non-specific information that wasn't related to restaurant orders (J.C.'s area of expertise), he made the same errors as the undergraduates. We develop specific strategies for the material we tend to study.

Bi-conditional rule

uses the logical relation "if and only if" ex: if the object is red then it must be squared to be a member and if the object is squared then it must be red to be a member. Restriction on red things and squared things, yellow triangles and blue circles can be a part of it.

conditional rule

uses the logical relation "if, then". ex: if an object is red then it must be squared to be a member of the category. Their are restrictions on red things, they must be squared. Although anything yellow, blue, or green doesn't have to be squared.

Conjunctive rule

uses the logical relation of "and" ex: if an object is red "and" squared, then it is a member of the category. It has to have both the feature of being red "and" squared.

Disjunctive rule

uses the logical relation of "or". ex: if an object is red or square, then it is a member of the category.

limitations to prototype models

we can't compute mathematical averages when we try to categorize by things like age or marital status, because these are discrete categories; categorization behavior in these contexts supports feature frequency instead. When dealing with smaller categories we might use feature frequency; for larger categories it is better to use a prototype so we don't waste resources storing all the features.

Semantic networks

we have nodes that represent concepts and pointers that represent relations among concepts (semantic organization); concepts and their connections are stored in long-term memory. These theories are tested with a sentence verification task: you decide whether sentences are true or false, and reaction time is measured. Faster reaction times mean the concepts have closer connections within the network.

feature frequency theory

we store members of categories along with their features. When we see something new, we compare its features with the features of all stored members and place it in the category with which it has the most features in common.
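A sketch of the comparison step, assuming each stored member is represented as a set of features (the examples are invented):

```python
# Illustrative feature-overlap sketch; the stored members and their
# features are made-up examples.
stored = {
    "bird": [{"feathers", "beak", "flies", "lays eggs"},
             {"feathers", "beak", "swims", "lays eggs"}],
    "fish": [{"scales", "fins", "swims", "lays eggs"},
             {"scales", "fins", "swims"}],
}

def categorize(new_item_features):
    """Place the item in the category whose members share the most features."""
    def overlap(category):
        return sum(len(new_item_features & member)
                   for member in stored[category])
    return max(stored, key=overlap)

print(categorize({"feathers", "flies", "lays eggs"}))   # -> "bird"
```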

Categorization

when we group objects or events together because we feel they are related in some way.

Problems for feature comparison model

first, you only go to step 2 for atypical items; the model can handle the category size effect, but it is computationally demanding and time consuming. Second, people argue that there is no such thing as defining features: if a bird's feathers get plucked out, is it no longer a bird? This applies to our everyday perception of categories. Third, the predictions are data driven rather than theory driven.

