PSY120 CH8


Noncompensatory Models: Conjunctive Model and Bounded Rationality

Models that do not allow for positive dimensions to compensate for negative dimensions are called Noncompensatory Models. One such model is the Conjunctive Model. In the conjunctive model, one evaluates each of the dimensions for an alternative. If each of the dimensions is above some minimum, then that alternative is selected. This equates to buying the first car that meets your minimum for Cost, MPG, and Color. It may seem odd to think about decision making in this way as you could arrive at very different choices depending on which alternatives you evaluated first. However, Herbert Simon said that we have limited cognitive resources and that we cannot evaluate many alternatives at once, so we often use a simpler strategy such as the conjunctive model. He referred to this as a satisficing search, which is a combination of the words satisfy and suffice. This is in line with Simon's idea of Bounded Rationality that states that when making decisions we are limited by the available information we have, our cognitive limitations, and the amount of time we have to make a decision. Because of these limitations, we often make decisions that are less than optimal.
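The conjunctive model described above can be written as a short satisficing search. A minimal sketch in Python; the cars, ratings, and cutoffs below are hypothetical:

```python
# Conjunctive model: accept the first alternative whose every dimension meets
# its minimum cutoff (a satisficing search). Ratings and cutoffs are
# hypothetical values on a -4 (highly undesirable) to +4 (highly desirable) scale.
def conjunctive_choice(alternatives, cutoffs):
    for name, ratings in alternatives:      # evaluation order matters
        if all(ratings[d] >= cutoffs[d] for d in cutoffs):
            return name                     # first "good enough" option wins
    return None                             # nothing met every minimum

cars = [("Car A", {"Cost": -2, "MPG": 3, "Color": 3}),
        ("Car B", {"Cost": 1, "MPG": 1, "Color": 1})]

strict  = {"Cost": 0, "MPG": 1, "Color": -1}   # Car A fails on Cost
lenient = {"Cost": -3, "MPG": 0, "Color": -1}  # both cars pass; order decides
```

With the strict cutoffs only Car B is acceptable, but with the lenient cutoffs both cars pass, so whichever car happens to be evaluated first is chosen. This is exactly the order dependence noted above.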

Syntax: Noam Chomsky

Much of our understanding of syntax comes from the work of Noam Chomsky. His theory of syntax has evolved considerably over the years. In his early work, he argued that sentences can be parsed according to phrase structure rules (Figure 8.6). Sometimes the phrase structure rules can parse a sentence in more than one way. When this occurs, there is an ambiguity in the sentence. A famous example of an ambiguity that can be explained by phrase structure rules comes from something Groucho Marx once said in a movie. He said, "While hunting in Africa, I shot an elephant in my pajamas. How an elephant got into my pajamas I'll never know." Marx was having fun with the two possible meanings of the first sentence. Phrase structure rules can explain this ambiguity because there are two different tree diagrams for this sentence. The question is where the prepositional phrase in my pajamas attaches. Does it describe the elephant or does it describe what Marx is wearing when he is shooting? Later Chomsky introduced transformational grammar to explain how sentences were related. For instance, the passive sentence, "The beer was drunk by the woman" can be explained with a transformation rule that converts the active kernel sentence "The woman drank the beer" into its passive form. There can also be multiple transformations on one sentence. "Was the beer drunk by the woman?" would involve both passive and a question transformation.
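The two readings of Marx's sentence can be made concrete by writing the parse trees as nested brackets. This is only an illustrative data structure, not Chomsky's notation; the labels (S, NP, VP, V, PP) are the standard phrase categories:

```python
# Two parses of "I shot an elephant in my pajamas", written as nested tuples.
# The bracketing is the point: the prepositional phrase "in my pajamas"
# can attach inside the noun phrase or to the verb phrase.
pp = ("PP", "in my pajamas")

# Reading 1: the elephant is in the pajamas (PP attaches inside the NP)
parse_np = ("S", ("NP", "I"),
                 ("VP", ("V", "shot"),
                        ("NP", ("NP", "an elephant"), pp)))

# Reading 2: the speaker shot while wearing pajamas (PP attaches to the VP)
parse_vp = ("S", ("NP", "I"),
                 ("VP", ("V", "shot"),
                        ("NP", "an elephant"),
                        pp))
```

The same words yield two different tree structures, which is what makes the sentence ambiguous.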

Orthographic Depth

Not all alphabetic languages map letters to sound with the same transparency. The consistency with which a given grapheme maps onto only one phoneme is referred to as Orthographic Depth. Languages such as Italian and Serbo-Croatian have relatively shallow orthographies because the graphemes more reliably map onto the phonemes of the language than in a language with a deep orthography like English. For example, in English the grapheme ea maps onto different phonemes as in fear, bear, steak, heart, and weak. One of the reasons English has such inconsistencies is because it is a true mongrel of a language. It has roots in and has borrowed from many languages. Because of this, you really can't expect it to be all that consistent. As written language relates directly to the sound of a language, the development of a child's phonological awareness is paramount to learning to read. Phonological awareness refers to one's understanding of the phonological structure of a language. It is measured by having children perform rhyme judgments, decide if two words have the same initial sound, and complete other tasks designed to assess children's understanding of the phonological code. Research has indicated that children who do well on tests of phonological awareness are less likely to have reading difficulties than those who do poorly. One of the main problems in developing phonological awareness is figuring out what a phoneme is and being able to detect phonemes in the spoken language. Phonemes are abstract entities and aren't real parts of the auditory speech stream. Remember the example above where we saw that the /p/ sounds in spin and pin are auditorily different. Different sounds mapping onto the same phoneme happens in many instances, particularly for consonants. To be able to read, the child must learn which sounds fall into which phoneme categories, and formal instruction is often necessary to accomplish this.
As the child learns which letters relate to which sounds, their ability to detect the individual phonemes in words increases. This in turn facilitates their reading ability. That is, learning to read changes the child's understanding of the spoken language. As the child begins to understand what phonemes are, they can begin to associate the written graphemes to the proper phonemes, and with practice, the child becomes proficient at relating the written and spoken forms of the language. What exactly has been learned, though, is a matter of debate. According to one view, we learn a set of rules that allow us to convert graphemes to phonemes. So, you would have a rule that would say c sounds like /k/, a sounds like /a/, and t sounds like /t/. Putting those together would give you /kat/, the pronunciation for cat. In addition to the rules, this view states that we also learn a direct association between the whole word written form and the whole word sound form. For instance, we learn that cat sounds like /kat/. Using this method is similar to looking up the pronunciation in a dictionary. You may wonder why we need grapheme to phoneme conversion rules at all if we form a direct association between the two whole word forms. Some evidence for the rule system is supported by the fact that we can read nonwords. You probably have no trouble pronouncing trilt or meve, although it is unlikely you have seen either of these letter strings before because they are nonwords. Similarly, we are able to sound out new words when we encounter them when reading. Other evidence comes from a type of acquired dyslexia where individuals cannot pronounce nonwords. It is as if they have lost their ability to use the rules of the language. Not everyone agrees that we learn rules and form associations between whole word forms. Some contend that children use the statistical structure of the language to learn to read. These researchers don't believe that we ever formally learn a rule. 
This, hopefully, sounds familiar because the argument is very similar to what we discussed above in relation to spoken language development. In support of this, PDP models have been built that are able to learn to pronounce both words that do and words that do not follow the rules. They are able to do this despite not having any rules programmed into them or even having units that represent whole words. There is no cat unit or /kat/ unit in the model. As with all PDP models, the model learns to associate one pattern with another, in this case orthography with phonology, by adjusting the activation pattern between simple neuron-like units. Both views are intriguing and have data to support them, but more research is needed before it is clear which view is better at explaining how we learn to read.
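The first view, with its rule route and whole-word route, can be sketched as a toy program. The one-word lexicon and the handful of grapheme-to-phoneme correspondences below are made up for illustration and are nothing like a full model of English:

```python
# A sketch of the rules-plus-lexicon view of reading aloud: look the word up
# in a whole-word lexicon first; if it is not there (e.g. a nonword like
# "trilt"), fall back on grapheme-to-phoneme conversion rules.
lexicon = {"cat": "/kat/"}                     # whole-word associations
gpc_rules = {"c": "k", "a": "a", "t": "t",     # toy one-letter-per-phoneme rules
             "r": "r", "i": "i", "l": "l"}

def pronounce(word):
    if word in lexicon:                        # direct whole-word route
        return lexicon[word]
    # rule route: convert grapheme by grapheme, so even nonwords get a sound
    return "/" + "".join(gpc_rules[g] for g in word) + "/"
```

The rule route is what lets the sketch pronounce a never-seen string like trilt, which is the evidence from nonword reading mentioned above.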

Subjective Expected Utility

Now that we have established that people are really bad with probabilities, we need to incorporate this into our model. We will do this by replacing the objective probability with subjective probabilities (SP), and the resulting formula is known as Subjective Expected Utility. To keep with our example, consider a gambler whose utility for a win is $15 and whose utility for a loss is -$5. Based just on utility, we would predict that they would not play the dice game, as the expected utility is -1.67. You can verify this with the Expected Utility equation above. However, if our gambler was really, really bad with probabilities and for some reason thought their probability of winning the dice game was 2/6 and their probability of losing was 4/6, then our prediction changes. We can see that using both utility and subjective probability, the prediction would be that the person would play the game because the subjective expected utility is positive. We started with the expected value model that gives an objective assessment as to whether a decision involving known values and probabilities is a good one. Expected value gives us a metric to gauge whether a decision maker's behavior deviates from what one should do. As our behavior often does differ from what would be considered optimal, we had to modify the expected value formula to include utility and subjective probability. Doing this provides a more descriptive model and allows us to more accurately predict what one will do in a given situation.
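The dice-game numbers can be checked directly. The -1.67 expected utility reported above implies an objective win probability of 1/6:

```python
# Expected utility vs. subjective expected utility for the dice game.
# Utilities from the text: win = +15, loss = -5; objective p(win) = 1/6.
def expected_utility(p_win, u_win, u_loss):
    return p_win * u_win + (1 - p_win) * u_loss

eu  = expected_utility(1/6, 15, -5)   # objective probability: negative, don't play
seu = expected_utility(2/6, 15, -5)   # the gambler's (wrong) subjective probability
```

With the objective probability the expected utility is -1.67, but with the gambler's inflated subjective probability of 2/6 the same formula comes out positive, so the model now predicts that they play.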

Dealing with Probabilities in Decision Making

Often when we make a decision, we rely on probabilities to help us decide what to do. In this section, we will look at some of the ways that probabilities influence our decision making. I will start by describing an objective model that bases decisions on both the probabilities and values associated with the possible outcomes. It's a simple model that will lead to a rational choice, but as we have already seen, people often behave in ways that are not rational. For this reason, we will modify the model. In essence, we will replace everything objective in the model with subjective estimates. This may not lead to the best decision, but it will make the model more likely to predict human decision making, and that is our goal. One of the themes that will emerge in this section is that most people have trouble with probabilities, and as we will see, early research painted a pretty grim picture when it comes to our ability to make correct judgments about the probability of one event given that another event occurred. For example, what is the probability that you have a disease given that you tested positive for the disease? More recent research, however, indicates that we may not be as bad at making these types of judgments as originally thought. It turns out most people do OK with these types of judgments, provided you ask the question right.
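The disease question is answered with Bayes' theorem. The rates in this sketch are hypothetical, chosen only to show how surprisingly low the answer can be when the disease is rare:

```python
# Bayes' theorem for "what is p(disease | positive test)?"
# Hypothetical numbers: 1% of people have the disease, the test catches it
# 90% of the time, and it falsely flags 9% of healthy people.
def p_disease_given_positive(base_rate, hit_rate, false_alarm_rate):
    p_positive = base_rate * hit_rate + (1 - base_rate) * false_alarm_rate
    return base_rate * hit_rate / p_positive

p = p_disease_given_positive(0.01, 0.90, 0.09)
```

With these numbers the answer is only about 9%, because the many healthy people who test positive swamp the few sick people who do. Judgments like this are exactly where early research found people going wrong.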

Gambler's Fallacy

On August 18, 1913 in a Monte Carlo casino, 10 spins of the roulette wheel had resulted in black. People started to crowd around the table and place bets on red, assuming it was overdue and that red's probability was increased. A few more spins and red had still not occurred, only black. Now people really started to think that red was past due and began to double and triple their bets on red, but the wheel kept spinning black. All told, black turned up on 26 consecutive spins that day and a lot of people lost a lot of money. The reasoning of the gamblers that red was overdue because it had not occurred recently is an example of the Gambler's Fallacy, which is the incorrect belief that past independent events can influence current events (Figure 8.26). If black has come up on the past 10 spins of a roulette wheel, people may reason that red has a better chance of occurring on the eleventh spin than its normal probability of .47, assuming there are two green spaces on the wheel. Of course, this is incorrect because each spin of the wheel is an independent event. As another example, suppose I flipped a coin 30 times and each time the result was a head. What do you think the probability of a tail is on the next flip? If you said anything greater than .50, you committed the gambler's fallacy. Each flip of a coin is an independent event, and past flips have no bearing on the current flip. The probability of a head is .5 on each flip, even if the previous 1,000 flips were tails. Amos Tversky and Daniel Kahneman argue that the gambler's fallacy exists because of a mistaken view of how randomness works. People expect a random event like flipping a coin to look random over the short term, but randomness works over the long term. Within a random sequence, there will be strings that look nonrandom. It is important to realize that the gambler's fallacy only applies when the trials are independent, such as flipping a coin or spinning a roulette wheel.
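A quick simulation makes the independence point concrete: even immediately after a streak of three heads, heads still comes up about half the time.

```python
import random

random.seed(8)  # fixed seed so the run is reproducible
flips = [random.choice("HT") for _ in range(100_000)]

# Look only at flips that immediately follow three heads in a row.
after_streak = [flips[i] for i in range(3, len(flips))
                if flips[i - 3:i] == ["H", "H", "H"]]
p_heads = after_streak.count("H") / len(after_streak)
```

If the gambler's fallacy were right, p_heads would dip below .50 after a streak. Instead it hovers right around .50, because each flip is independent.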
Not all games of chance involve independent events, though. In blackjack, a number of decks are shuffled together and then cards are dealt from this stack of multiple decks. After each hand, the cards that were dealt are discarded and the dealer deals more cards from the stack. In this case, the game has a "memory." What was dealt on previous hands affects the probabilities of subsequent hands. Notice how this is different from the roulette wheel or a coin flip. The fact that blackjack has a memory has led many gamblers to try to take advantage of it by counting cards. What card counters try to do is exploit the fact that in blackjack the hands are not independent. Not surprisingly, the casinos are not fans of this, as it tips the odds in the player's favor. To counteract this, many casinos today have introduced machines that are designed to shuffle all the cards after each hand so that the hands become independent, just like the spins of a roulette wheel.

Noncompensatory Models: Lexicographic Model

One final noncompensatory decision model is the Lexicographic Model. As with the elimination by aspects model you start by choosing the most important dimension. You then select the alternative that is highest on that dimension. If there is a tie between multiple alternatives on the most important dimension, then you compare them on the second most important dimension. You keep doing this until there is only one left. Both the lexicographic model and the elimination by aspects model use an intradimensional search of the information. A single dimension is evaluated across alternatives. This is in contrast to the conjunctive model that involves an interdimensional search where all dimensions are considered at one time for an alternative. At this point, you may be wondering which model we use to make a decision. That seems to depend on the number of alternatives and number of dimensions that need to be considered. John Payne evaluated subjects' preferences for using two compensatory models (additive and additive difference) and two noncompensatory models (conjunctive and elimination by aspects). He had participants evaluate different one-bedroom apartments and decide which they would rent for themselves. They were given either 2, 4, 6, or 12 apartments varying on 4, 8, or 12 dimensions. He provided subjects with information relevant to each of the dimensions. This information was written on notecards with the dimension on the front and the value on the back so that the subject had to turn the notecard over to see the value. For instance, there was a notecard with "Rent" written on the front and on the back was $110 (the study was done in 1976). On the back of one of the "Noise Level" cards was the value "high." Measuring which notecards subjects choose to turn over gives an idea of the dimensions that they are using to make their decisions. Additionally, subjects were encouraged to think aloud as they made their decisions. 
Based on these two measures, the number of alternatives seemed to be the biggest determinant of the type of strategy that subjects used. If there are only two alternatives, subjects tended to rely on one of the compensatory strategies. When there were many alternatives, subjects used one of the less cognitively taxing noncompensatory strategies to reduce the number of alternatives and then switched to one of the more cognitively demanding compensatory methods.
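The lexicographic model itself is easy to sketch. The car ratings below are hypothetical, constructed so that the most important dimension produces a tie:

```python
# Lexicographic model: compare alternatives on the most important dimension,
# break ties with the next most important dimension, and so on until one
# alternative remains. Ratings are hypothetical.
def lexicographic_choice(alternatives, dims_by_importance):
    remaining = dict(alternatives)
    for dim in dims_by_importance:
        best = max(r[dim] for r in remaining.values())
        remaining = {name: r for name, r in remaining.items() if r[dim] == best}
        if len(remaining) == 1:
            break
    return next(iter(remaining))

cars = {"Car A": {"Cost": 1, "MPG": 3, "Color": 2},
        "Car B": {"Cost": 1, "MPG": 1, "Color": 4}}

choice = lexicographic_choice(cars, ["Cost", "MPG", "Color"])
```

Here the cars tie on Cost, so the decision falls to MPG and Car A wins; Color, though Car B is better on it, is never even consulted. That single-dimension-at-a-time comparison is the intradimensional search described above.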

Means-Ends Analysis

One heuristic that can be used in solving problems of transformation is the Means-Ends Analysis heuristic. This heuristic involves (1) noting the current problem state, (2) noting the goal state, and (3) establishing a Subgoal to reduce the discrepancy between these two. At the beginning of the Tower of Hanoi problem, an optimal subgoal would be to get the largest disk on the third peg. In Figure 8.19, this would involve going four states down the right-hand side. Although subgoals are often helpful, there are a couple of problems that can arise from forming them. For one, it is not always obvious what an appropriate subgoal would look like. Also, sometimes what looks like a valid subgoal is not on the most direct solution path. If the problem solver moves towards this nonoptimal subgoal, they will end up at a state that does not lead, at least directly, to the solution. As an example, moving four states down the left-hand side of Figure 8.19 leads to a nonoptimal subgoal where the red disk is on the middle peg and the other two disks are on the third peg. If the problem solver were to end up in this state, there is no way for them to solve the problem without undoing a previous move or making many unnecessary moves.

Language Development

One of the great debates in psychology is B.F. Skinner's behaviorist theory versus Noam Chomsky's cognitive theory of language. At the heart of their disagreement is the perennial nature versus nurture debate. After covering these two theories, we will look at whether there is a critical period in which humans are ideally suited to learn a language, some of the evidence that language learning is really statistical learning, and the difference between learning a spoken language versus a written language.

Problems of Arrangement

Problems of Arrangement require the problem solver to take the elements of a problem and arrange them in some way to satisfy a criterion. The two-string problem and Duncker's candle problem used by the Gestalt psychologists are both problems of arrangement. In the two-string problem, the subject is given two strings and a number of objects that they must arrange in a certain way that will allow them to tie the two strings together. In the candle problem, the subject needs to arrange the objects so that the candle can be attached to the wall. As you should recall, both of these problems were used to explain functional fixedness. Sometimes a problem of arrangement can be difficult because we only think of the elements of the problem in their most common uses. When we break out of that way of thinking, we often obtain the insight that the Gestalt psychologists were so fond of. Another example of arrangement problems is anagrams. Here's an anagram for you to try and solve: yhpgocyslo. In trying to solve this anagram, you need to arrange the letters into a word. Here's a hint. It's in the title of this book. Do you see the similarity between the two-string, candle, and anagram problems? All require you to take the elements of the problem and arrange them in a particular way to arrive at the solution. Even though these problems seem very different, they are similar in the way the solution is reached.
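Checking an anagram is itself a small arrangement exercise: two strings are anagrams exactly when they contain the same letters, which sorting makes easy to test.

```python
# Two strings are anagrams when their sorted letters match.
def is_anagram(a, b):
    return sorted(a.lower()) == sorted(b.lower())

# The scrambled letters from the text really do spell the book's subject.
result = is_anagram("yhpgocyslo", "psychology")
```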

Gestalt Work on Problem Solving

The Gestalt psychologists were a group of psychologists in Germany who were particularly productive during the early 1900s. They are probably most well known for their principles of perceptual organization (see Chapter 4), but they also made significant contributions to the area of problem solving. According to the Gestaltists, one way to solve problems was to rely on previous experiences. They called this Reproductive Thinking. Although relying on previous experiences is often helpful, it can sometimes be a hindrance. A good example of this is Functional Fixedness. Functional fixedness refers to a cognitive bias to only think about an object in terms of its most common function. Norman Maier's two-string problem illustrates how functional fixedness impedes problem solving (Figure 8.12). In the two-string problem, a subject would be placed in a room that had two strings that were connected to the ceiling. The subject was told that they were to tie the two strings together. Also in the room were pliers and a few other items. Initially, subjects would grab one string and try to reach the other one. This quickly proved to be futile as the two strings are not long enough to allow this strategy to work. The solution to the problem is that one needs to take the pliers and tie them to the end of one of the strings and then swing the string back and forth using the pliers as a weight. Functional fixedness prevents many from solving this problem because they tend to think of pliers as something that is used to grasp an object. They don't think of them as a pendulum weight. Another example of functional fixedness can be found in subjects' difficulty in solving Karl Duncker's candle problem. In the candle problem, subjects were given a candle, a box of tacks, and some matches. They were told to figure out a way to attach the candle to the wall. The solution to solving the problem is to take the tacks to pin the box to the wall.
Then the box can serve as a stand for the candle. Functional fixedness prevents many people from solving this problem because they think of the box as a container, rather than as a candle stand. Past information can also interfere with problem solving when a mental set is formed. Mental set refers to only using solutions that have worked in the past when trying to solve current problems. Luchins and colleagues studied mental set with the water-jar problem (Figure 8.14). In the water-jar problem, subjects are given three jars of varying size and asked how they could pour water back and forth between them to arrive at a certain quantity of water. As an example, suppose you were given a jar that holds 20 cups (Jar A), one that holds 15 cups (Jar B), and another that holds 3 cups (Jar C). Using these three jars how could you end up with exactly two cups of water in one of the jars? The solution is to fill up the 20-cup jar and pour it into the 15-cup jar. This will leave 5 cups in the 20-cup jar. Filling the 3-cup jar with this will leave 2 cups in the 20-cup jar, which is the answer to the problem. So, we would say the solution is Jar A - Jar B - Jar C, or more simply A - B - C. What Luchins and colleagues did was give a group of subjects water-jar problems that all had the solution B - A - 2C. The interesting finding was that after participants had solved a few problems using the B - A - 2C solution they continued to apply this solution to future problems even when there was a simpler solution to the problem. Furthermore, when given a problem that had only a simple solution and the B - A - 2C solution would not work, many were not able to solve the problem. Luchins claimed that this failure was due to a mental set. Once participants learned the B - A - 2C solution they continued to use it even when it wouldn't work, and they were not able to see the solution to the simple problem until it was shown to them.
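The A - B - C solution can be checked with simple arithmetic, tracking only how much water remains in Jar A:

```python
# The water-jar solution A - B - C, checked step by step.
a, b, c = 20, 15, 3   # jar capacities in cups (Jars A, B, C)

water = a             # fill Jar A: 20 cups
water -= b            # pour into Jar B until B is full: 5 cups left in A
water -= c            # fill Jar C from A: 2 cups left in A
```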
Although the Gestaltists recognized that reproductive thinking was an important part of problem solving, they were also very interested in Productive Thinking, which involves restructuring of the problem. Often this restructuring leads to the sudden realization of the answer. They referred to this as Insight. The two-string problem we discussed in relation to functional fixedness was also used to study insight. Remember that to solve the problem, the subject needs to tie the pliers to one of the strings, creating a pendulum. However, many subjects had trouble with the problem and were not able to solve it. After ten minutes of trying to solve the problem, the experimenter would enter the room and appear to accidentally brush one of the cords. Doing this would set the cord swinging. After seeing the cord swinging, many people realized the solution and solved it quickly. The Gestalt interpretation of this is that when the cord was swinging the problem became restructured and insight occurred. Without question, insight is certainly an appealing idea. Most of us have had that "ah hah" moment, when the solution to a problem seems to flash into our mind. Unfortunately, the Gestaltists were never clear on exactly what caused insight and failed to provide any convincing evidence in support of it. However, recent brain imaging research has been more promising. Researchers have found that when subjects report solving a problem based on insight, their brain activity is different than on trials when they do not report using insight. The brain even gives off a distinct electrical pattern a few hundred milliseconds before subjects report having insight. Research like this makes a strong case in support of insight. Nevertheless, more research is needed to understand exactly when and how the cognitive system uses insight. Before ending our discussion of the Gestalt work on problem solving, there is one more problem that we need to address.
Before reading any further, see if you can solve the problem in Figure 8.15 that is based on the work of Maier. Most people have difficulty with this problem and can't figure out the solution without some help. What is interesting about this problem is that the solution is easy once you realize that you cannot solve it because you have placed an unnecessary constraint on the problem. Constraints that the problem solver places on the problem, but that are not part of the problem are called Unnecessary Constraints. Often these unnecessary constraints will make the problem impossible to solve. Sometimes an unnecessary constraint can be removed with a hint, so here is a big one. What if I told you that the expression "think outside the box" seems to have originated in relation to this problem? Did that help give you that "ah hah" moment? No? To see the solution go to Figure 8.16. If you were unable to solve the problem, when shown the solution your initial reaction may have been that it is not fair to go outside the dots, but that's the trick. Never did the problem say that you could not go outside the dots. You placed that constraint on the problem. The only constraints stated in the problem are that you cannot remove your pencil from the paper or retrace a line. To be an effective problem solver, you have to understand when something is a constraint, but you must also realize when something is not a constraint.

Phonemes

The basic sounds of a language are referred to as Phonemes. There are well over 100 phonemes across all languages, but each language only uses a limited set of the total possible phonemes. In English, there are around 40. Phonemes are the smallest unit of sound that can lead to a change of meaning. Consider the words men and pen. The difference between these two words is due to the /m/ phoneme being changed to a /p/ phoneme. Because there are variations in how a phoneme actually sounds within a language, it is useful to think of phonemes as categories of sound. For example, the /p/ sound in the words spin and pin are actually different sounds. You can verify this for yourself. Put the back of your hand in front of your mouth and say the two words. You should feel a stronger release of air when you make the /p/ sound in pin than in spin. Even though these two sounds are different, they are treated as the same because they are both categorized as the /p/ sound by English speakers.

Problems of Transformation

The last of Greeno's categories is Problems of Transformation. Problems of transformation consist of an initial state, a goal state, and a set of rules that allow one to move between the initial state and goal state. The Luchins water-jar problem that we discussed above is an example of a problem of transformation. The initial state is three empty jars. The goal state is a certain quantity of water in one of the jars. The rules specify that you are able to fill the jars and pour water back and forth in any manner necessary. Another problem of transformation that has been the focus of much attention in cognitive psychology is the Tower of Hanoi problem (Figure 8.18). The Tower of Hanoi problem consists of three pegs and some number of disks that are placed on the pegs. In the initial state of the game, all of the disks are on the first peg. The goal state is to get all the disks onto the final peg. The rules for moving the disks are that you can only move one disk at a time, a larger disk cannot be placed on top of a smaller disk, and that you can only move the top disk from one of the pegs. Notice that the problems of transformation differ from the other two types of problems as the goal state is clearly stated. Be this as it may, problems of transformation are sometimes difficult because it is not clear how to most efficiently move from initial state to the goal state. Problems of transformation and how to solve them were studied extensively by Newell and Simon. It is their research that we turn to now.
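The classic recursive solution to the Tower of Hanoi follows directly from the rules above: to move n disks, first move the n - 1 smaller disks to the spare peg, move the largest disk to the target, then move the n - 1 disks back on top of it. A sketch:

```python
# Recursive Tower of Hanoi: returns the list of (from_peg, to_peg) moves.
# Rules: one disk at a time, only the top disk of a peg may move, and a
# larger disk never sits on a smaller one.
def hanoi(n, source, target, spare, moves=None):
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)   # clear the way
        moves.append((source, target))               # move the largest free disk
        hanoi(n - 1, spare, target, source, moves)   # restack the rest on top
    return moves

moves = hanoi(3, "peg1", "peg3", "peg2")
```

For three disks this produces the minimum of 2**3 - 1 = 7 moves, matching the seven-move optimal path through the problem space discussed later in the chapter.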

Decision-Making Models: Additive Model

The models we will cover here are concerned with the processes used to reach a decision. One of the simplest models is the Additive Model. Using this model, one assigns a value to each of the dimensions for each of the alternatives one is trying to decide between. These values are then summed, and the alternative with the highest sum is chosen. Let's look at an example. Suppose you were trying to decide between purchasing two cars. In this simple example, you have two alternatives (Car A and Car B). You have identified three dimensions that are important to you (Cost, MPG, and Color). To use the additive model you need to assign a value to each of these three dimensions for each car. Let's say you gave the ratings below where you used a scale that ranged from -4 (highly undesirable) to +4 (highly desirable). Based on these ratings, you would choose Car A. When using the additive model, it is important to make sure you identify all of the dimensions that are important to you. If you had ignored MPG, then the sum for Car A would be (+1) and the sum for Car B would be (+2), leading you to choose Car B. In addition to identifying the dimensions that are important to you, you need to also make sure you think about the relative weight of each dimension. In the simple additive model, each dimension is weighted equally. However, a simple modification can correct this. If you decide that the Cost is twice as important to you as Color or MPG, you would multiply the Cost ratings by 2. The sum for Car A would now be 2 and the sum for Car B would be 4, and you would choose Car B. As this example makes clear, the additive model can lead to different choices based on the dimensions that are considered and the importance given to those dimensions.
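The ratings table is not reproduced here, so the values below are one hypothetical set chosen to be consistent with the sums the text reports (Car A = +1 and Car B = +2 without MPG; Car A = 2 and Car B = 4 with Cost doubled). A sketch of the additive model with and without weights:

```python
# Additive model: sum the (optionally weighted) dimension ratings for each
# alternative and pick the highest. Ratings are hypothetical values on the
# -4 to +4 scale, consistent with the sums reported in the text.
ratings = {"Car A": {"Cost": -2, "MPG": 3, "Color": 3},
           "Car B": {"Cost": 1, "MPG": 1, "Color": 1}}

def additive_score(r, weights):
    return sum(weights.get(d, 1) * v for d, v in r.items())  # default weight 1

equal   = {n: additive_score(r, {}) for n, r in ratings.items()}
no_mpg  = {n: additive_score({d: v for d, v in r.items() if d != "MPG"}, {})
           for n, r in ratings.items()}
cost_x2 = {n: additive_score(r, {"Cost": 2}) for n, r in ratings.items()}
```

With equal weights Car A wins (4 vs. 3); dropping MPG flips the choice to Car B (1 vs. 2), and doubling the Cost weight does too (2 vs. 4), illustrating how the dimensions and weights considered can change the decision.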

Morphemes

The next level up in the hierarchy contains the smallest units of meaning in a language. These basic units of meaning are called Morphemes. All words consist of one or more morphemes. Two types of morphemes are Roots and Affixes. Roots are the main part of the word. Many words only consist of a root. The word cork contains only the root morpheme cork. This root cannot be broken down into any smaller unit that still conveys meaning. The root cork can be combined with different affixes to give different words though. In English, the two major types of affixes include Prefixes and Suffixes. Prefixes are morphemes that are added to the beginning of a root and change the meaning. The prefix un- can be combined with the root cork to give the word uncork. Notice that adding the prefix un- changed the meaning. A suffix is a morpheme that is added to the end of a root. The suffix -ed can be added to the root cork to yield corked. We could even put the prefix, root, and suffix together to get the word uncorked.

Problem Solving as Problem Space Search: Newell and Simon

The purpose of Newell and Simon's computer program was different. They wanted to write a program that solved problems in the same way as humans. That is, they wanted to simulate human problem solving. When trying to simulate human problem solving, it is important that the simulation gets from the initial state to the goal state in the same way that humans do, including making the less than optimal choices that we are likely to make. Of course, to simulate human problem solving, they needed to have some idea of how humans solve these types of problems in the first place. To obtain this information, they would have subjects work problems and say what they were thinking while they tried to solve them. Based on these verbal reports, they wrote a computer program they called the General Problem Solver (GPS). The GPS was designed to copy the human cognitive system architecture. It processes information in a serial fashion. It has a limited-capacity memory analogous to our working memory system. It also included a large-capacity store to simulate long-term memory. Below we will look at some of the developments in problem solving that were a result of Newell and Simon's work with the GPS. In Newell and Simon's work, problem solving was viewed as a search through Problem Space, which consists of the possible states that a problem can take and the operators that allow one to move between states (Figure 8.19). As an example, in the Tower of Hanoi problem, each state represents a different way that the disks could be placed on the pegs. The move operator allows one to move between these states. Starting at the top of the triangle in Figure 8.19, we could use the move operator to move the top disk to either the middle or last peg. Each of the possibilities represents a different state the problem can be in. The idea is that we solve a problem like the Tower of Hanoi by searching for the most direct route through problem space.
As we can see in Figure 8.19, the most direct route to solving the Tower of Hanoi problem requires seven moves and involves moving down the right side of the figure. Because we have a limited-capacity cognitive system, Newell and Simon reasoned that we often rely on shortcuts, or Heuristics, when searching through problem space. Heuristics can be contrasted with Algorithms, which are sets of procedures that will lead to the solution, provided one exists. Whereas algorithms are guaranteed to find a solution, heuristics are not. Heuristics will often lead to the solution, but they can fail to find it in some cases. The tradeoff is that heuristics are less cognitively taxing.
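The search idea can be made concrete. Below is a minimal Python sketch (my own illustration, not from the chapter) that treats the three-disk Tower of Hanoi as a problem space and applies breadth-first search, an algorithm in the sense just described, so it is guaranteed to find the shortest route. The state representation and function names are assumptions of the sketch.

```python
from collections import deque

def moves(state):
    # state: a tuple of 3 pegs, each a tuple of disks top-to-bottom (smaller numbers on top)
    for i, src in enumerate(state):
        if not src:
            continue
        disk = src[0]
        for j, dst in enumerate(state):
            if i != j and (not dst or disk < dst[0]):
                new = [list(p) for p in state]
                new[i].pop(0)
                new[j].insert(0, disk)
                yield tuple(tuple(p) for p in new)

def solve_hanoi(start, goal):
    # breadth-first search through problem space: guaranteed to find the shortest path
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        state, depth = frontier.popleft()
        if state == goal:
            return depth
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))

print(solve_hanoi(((1, 2, 3), (), ()), ((), (), (1, 2, 3))))  # 7 moves
```

Exhaustive search like this works for tiny problems, but the chapter's point stands: for larger problem spaces, a limited-capacity system has to fall back on heuristics.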

river-crossing problem

The river-crossing problem is one problem that causes difficulty because the solution requires one to move away from the goal state and, as such, violates the hill climbing heuristic. The river-crossing problem comes in many forms and has been around since at least the ninth century, when it was published in a Latin collection of mathematical problems called Propositiones ad Acuendos Juvenes, which translates to Problems to Sharpen the Young. The idea behind the river-crossing problem is that you have some number of creatures that you need to get across a river, but there are restrictions on how you are allowed to move them. In one version, there are three missionaries and three cannibals, and the object is to get them all to the other side. There is a canoe that can be used to cross the river, but it can only hold two people. At no point can the cannibals outnumber the missionaries on either bank, because they would eat the missionaries. Finally, at least one person has to be in the boat for it to move from bank to bank. When initially trying to solve the problem, people often use the hill climbing heuristic and try to increase the number of individuals on the opposite bank. There is a point in the problem, however, where it is necessary to move individuals back across the river to where they started. This goes against the hill climbing heuristic because you are reducing the number of people on the opposite bank.
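As a sketch of the state space involved, here is a small Python search (my own illustration, not from the text) over missionaries-and-cannibals states. It finds the standard shortest solution of 11 crossings, which necessarily includes trips back toward the starting bank.

```python
from collections import deque

def safe(m, c):
    # a bank is safe when it has no missionaries or at least as many missionaries as cannibals
    return 0 <= m <= 3 and 0 <= c <= 3 and (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def solve_crossing():
    # state: (missionaries on start bank, cannibals on start bank, boat on start bank?)
    start, goal = (3, 3, True), (0, 0, False)
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (m, c, boat), path = frontier.popleft()
        if (m, c, boat) == goal:
            return path
        for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:  # the canoe holds one or two
            nm, nc = (m - dm, c - dc) if boat else (m + dm, c + dc)
            nxt = (nm, nc, not boat)
            if safe(nm, nc) and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))

path = solve_crossing()
print(len(path) - 1)  # 11 crossings in the shortest solution
```

Stepping through the returned path shows the counterintuitive moments: on several crossings the number of people on the far bank goes down, exactly the moves a hill climber resists.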

syntax

The rules for combining words into phrases and phrases into sentences are referred to as the syntax of the language. There are rules for how words can be combined into noun phrases and verb phrases and how these can be combined into sentences. A simple rule of syntax for English would say that a noun phrase must have a noun and may include an adjective and/or a determiner. Given this rule, the young boy would be a valid noun phrase. As you probably guessed, the rules for what is and is not a valid noun phrase are a lot more complex than our simple rule. In fact, the rules can even change across languages. In English, the rule would place the adjective before the noun (the white house), but in Spanish the adjective often occurs after the noun (la casa blanca). What is truly amazing is that at a very young age children begin to pick up on the syntax of the language to which they are exposed, and their expressions begin to conform to the rules of the language.
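The toy rule can be made concrete. Here is a small Python check (my own sketch, with a made-up mini-lexicon) of exactly the rule described above: an optional determiner, then an optional adjective, then a required noun.

```python
# a toy checker for the simple noun-phrase rule: optional determiner,
# optional adjective, then a required noun (the mini-lexicon is made up)
DETERMINERS = {"the", "a"}
ADJECTIVES = {"young", "white"}
NOUNS = {"boy", "house"}

def is_noun_phrase(words):
    i = 0
    if i < len(words) and words[i] in DETERMINERS:
        i += 1
    if i < len(words) and words[i] in ADJECTIVES:
        i += 1
    # exactly one noun must remain, and it must end the phrase
    return i == len(words) - 1 and words[i] in NOUNS

print(is_noun_phrase(["the", "young", "boy"]))  # True
print(is_noun_phrase(["young", "the", "boy"]))  # False
```

Note how the rule both licenses valid combinations and rules out invalid orderings; handling Spanish would require a different rule placing the adjective after the noun.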

What are the additive model and additive difference model considered?

The two models we have covered so far are often referred to as Compensatory Models because they allow positive dimensions to compensate for negative dimensions (Figure 8.21). In the car example, the positive MPG and Color dimensions compensate for the lower score Car A has for cost.

When Judgment and Decision Making Go Awry

The wealth of research on decision making has uncovered some common traps that people fall into when making judgments and decisions. Like most traps, the best way to avoid them is to be aware of them. In this section, we consider three of the most researched. Having an understanding of these judgment and decision-making biases will help you avoid them in the future.

Categorizing Problems

There are a seemingly infinite number of problems that we could identify. To help make our discussion of problem solving more manageable, it is necessary to consider how we might go about classifying problems. One way of categorizing problems is in terms of whether they are well defined or ill defined. A Well-Defined Problem is one that has a clearly specified initial state, goal state, and allowable moves for moving between the initial state and goal state, whereas an Ill-Defined Problem fails to specify one or more of these. Mazes are a good example of well-defined problems. Many of the problems we face in life, however, are ill defined. What career to choose and who to choose as your life partner are both ill-defined problems that obviously are of high importance to most people. Another highly influential way of categorizing problems was given by James Greeno who classified problems into one of the following three categories: Problems of arrangement, problems of inducing structure, and problems of transformation. Before we review each of these problem types, it is worth noting that Greeno was not trying to suggest that every problem could be placed into one of these categories. Instead, he was trying to provide a way to understand problem solving based on the general cognitive operations that are involved in solving a given problem. Although there are many problems that don't fit neatly into one of the categories, it is still useful to consider whether a problem relies primarily on arrangement, inducing structure, transformation, or some combination of these.

Leda Cosmides and John Tooby

There have been many studies looking at how people process probabilities like the disease problem above, and the consensus is that humans are pretty lousy when it comes to calculating probabilities. That doesn't mean that we are unable to answer these types of problems, though. Other researchers have claimed that when these questions are presented as frequencies, rather than probabilities, people do much better. Work done by Leda Cosmides and John Tooby makes this clear. They took the same problem that we worked through above, but presented it in terms of frequencies. Here is their version of the problem. 1 out of every 1000 Americans has disease X. A test has been developed to detect when a person has disease X. Every time the test is given to a person who has the disease, the test comes out positive (i.e., the "true positive" rate is 100%). But sometimes the test also comes out positive when it is given to a person who is completely healthy. Specifically, out of every 1000 people who are perfectly healthy, 50 of them test positive for the disease (i.e., the "false positive" rate is 5%). Imagine that we have assembled a random sample of 1000 Americans. They were selected by a lottery. Those who conducted the lottery had no information about the health status of any of these people. The question, then, is how many of the people who test positive will actually have the disease. When given this version of the problem, participants did much better. The most common response was the correct answer, 2%. You may have noticed that the frequency version of the problem contains useful information that was not included in the original problem. To test whether this would help subjects correctly solve the probability version, Cosmides and Tooby also gave a probability version like the original problem, but in it they made clear what a true positive was and what a false positive was. On this version of the problem, subjects did do better than in the original study, but nowhere near as well as those who were given the frequency version.
Why are people more accurate when the problem involves frequencies rather than probabilities? The answer seems to be that we have evolved to process frequencies. Probability simply did not exist as a concept in our early evolutionary history. However, the relative frequency of events did. For instance, ancient hunters knew that prey occurred more often at one hunting spot than at another. They could exploit this frequency information to increase the likelihood of capturing prey and being able to feed themselves and their families. The argument that we are finely tuned to frequency is one that you will see across many areas of psychology. When discussing language, I noted that there are those who believe we do not really learn rules, but instead learn the statistical structure of the language, such as the relative frequency of events. There is also strong evidence that we automatically encode frequency-based information. You probably know that the most common word in the English language is the, and that the word play is more common than the word slay. However, I doubt you ever tried to intentionally learn this information. You know it because you automatically encode frequency information. All in all, it seems we are pretty bad at using Bayes' Theorem. The lesson here, though, is not that we should start teaching people how to compute probability using Bayes' Theorem. Instead, we need to concentrate on teaching people how to convert probability information into the less cognitively taxing frequency form. Basing decisions on relative frequencies will lead to more sound decisions than those based on probabilities.
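The frequency reframing amounts to simple counting. A minimal sketch, using the numbers from the Cosmides and Tooby version above:

```python
# frequency framing of the disease problem: reason with counts, not probabilities
sick = 1                 # 1 of every 1000 Americans has disease X
true_positives = sick    # the test catches every sick person (true positive rate 100%)
false_positives = 50     # about 50 of every 1000 healthy people also test positive

# of all the people who test positive, how many are actually sick?
share = true_positives / (true_positives + false_positives)
print(round(share * 100, 1))  # 2.0 -> about 2% of positives are actually sick
```

One sick person among roughly 51 positives is about 2%, the same answer Bayes' Theorem gives, but with far less cognitive effort.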

rule system for combining symbols

These examples highlight two important aspects of the rule system for a language. The rules must specify what is an allowable combination of symbols, but they must also specify what is not an allowable combination. The rules that specify what is allowable and not allowable are referred to as the Syntax of the language. Syntax allows language to be Generative: we can take a limited number of symbols and combine them into an infinite number of expressions. Although there are an infinite number of expressions, the language is still constrained. Now that is fascinating. Think about it. The rules allow for infinity, but at the same time place limits on something that is infinite. How can you have something that is infinite, but constrained? As odd as this may seem, there are many systems that exhibit the same characteristics. Mathematicians call them fractals.
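The "infinite but constrained" point can be sketched in a few lines of Python (my own illustration): a single finite rule that generates an unbounded family of well-formed expressions while excluding everything else.

```python
from itertools import count, islice

# one finite rule applied repeatedly: NP -> "the dog" | NP + " that chased the dog"
def expressions():
    np = "the dog"
    for _ in count():
        yield np
        np += " that chased the dog"

# infinitely many expressions exist; here are the first three
for s in islice(expressions(), 3):
    print(s)
```

A finite rule set, infinitely many licensed strings, and yet "dog the chased that" is never produced: infinite, but constrained.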

The Levels of Language

These levels are organized hierarchically so that each level is formed by a combination of items at lower levels. At the lowest level are the sounds of the language. These sounds can be grouped together to form higher-level units that have meaning. These meaning units can be combined to form words, and the words can be combined into phrases. Finally, phrases can be combined to form sentences.

Critical Period Hypothesis

Tied to the idea of Universal Grammar is the Critical Period Hypothesis, which states that there is a limited window during development in which access to Universal Grammar is available. After that period, learning a new language is significantly more difficult. There are a few pieces of evidence that support the critical period hypothesis. Any child who has normal cognitive functioning and is exposed to a language will learn to speak that language. Note that I said "speak the language." Learning to read and write the language is very different and requires extensive instruction to master. This is not the case for spoken language. Children who are not exposed to language during the critical period fail to develop language in a normal fashion. The tragic case of Genie demonstrates this. Genie is the pseudonym given to a young girl who suffered years of abuse from her father. From a young age, Genie's father would strap her into a straitjacket and fasten her to a toilet or crib. She grew up in virtual isolation and was not allowed to interact with anyone. At the age of 13, Genie was discovered and removed from her father's abusive grasp. At this time, she had no real language ability. Over the following years, there were numerous attempts to teach Genie English. Although she did progress in her language ability, particularly her vocabulary, she never reached the competence of a native speaker and had particular trouble picking up the syntax of the language. Other support for the critical period hypothesis comes from second language learning. Numerous studies have shown that second language learning is much easier earlier in life. Children learning a second language are more likely to attain native fluency with the language than are adults who are trying to learn the same language.
Additionally, research on deaf individuals learning American Sign Language has shown that the earlier individuals are taught the language, the more proficient they become at using it.

Utility

To make the formula for expected value more descriptive, we need to replace value with utility. Utility can be thought of as a preference or desire for an outcome. For the dice game, it would be what a win or loss is worth to the person. If you really enjoy winning money, then the utility of a win in the dice game would be greater than its value. When we replace value with utility, the resulting formula is called Expected Utility (Figure 8.23). If the utility of a win were $40 and the utility of a loss were -$5, then the expected utility would be positive, and the prediction would be that you would play the game. As utility is by nature subjective, it cannot be directly measured. Instead, it must be inferred from the choices that an individual makes. We could make our model even more descriptive if we replaced the true probability with subjective probability. In the dice example we have been working with, it is easy to figure out the true probability, but this is not always the case in more complicated situations. People are notoriously bad at figuring out the true probability of an event. The trouble people have with probability can be demonstrated nicely with the Monty Hall problem, which is based on the game show Let's Make a Deal, whose host was Monty Hall. The problem asks you to imagine that you are a contestant on a game show. You are shown three doors and told that behind one of the doors is a desirable prize like a car. Behind the remaining two doors are undesirable prizes such as a goat. You pick one of the three doors, and then the show's host, who knows what is behind the doors, opens one of the remaining doors to reveal a goat. The host then asks you if you want to switch doors or keep the one you have. Should you switch doors (Figure 8.24)? Most people say that it doesn't matter: there are two doors left, so they reason that there is a 1/2 chance that their door has the car, and switching should not matter. But they are wrong.
You should switch doors because there is a 2/3 chance that the other door has the car. Not possible, you say? To understand why switching is the best thing to do, consider what the probability of selecting the car is at the beginning. You have three doors, only one of which contains the car. So the probability of selecting the car is 1/3 and the probability of selecting a goat is 2/3, because two doors have a goat. After the host opens a door that contains a goat, you are better off switching because there is a 2/3 chance that your initial door hides a goat and only a 1/3 chance that it hides the car. The probabilities associated with your initial choice are still in effect after the host opens a door. Said another way, when you switch doors you always get the opposite prize from what was behind the door you originally chose. Because you have a 2/3 probability of having selected a goat, it is better to switch. It is worth noting that the host will always open a door that has a goat behind it. If the host randomly chose a door to open, then it would not matter whether you switched. I know; I know. The Monty Hall problem can be confusing, but that is the point. It shows that we often have difficulty understanding the true probability of an event, even when it is something as seemingly simple as choosing between two doors. By the way, the Monty Hall problem became famous when a reader submitted it to Marilyn vos Savant, who writes the Ask Marilyn column for Parade magazine. She gave the correct answer and said that you should switch, but she received a deluge of letters claiming she was wrong. Many of these were from individuals with PhDs, including mathematicians. It took computer simulations to convince many people she was right. So, don't feel too bad if you had trouble with the problem. You are in good company.
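If the argument still feels slippery, a simulation settles it. Here is a minimal Python sketch (my own, not the published simulations) of the always-switch strategy:

```python
import random

def monty_trial(switch, rng):
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # the host knowingly opens a door that is neither your pick nor the car
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(1)   # fixed seed so the run is reproducible
trials = 100_000
wins = sum(monty_trial(switch=True, rng=rng) for _ in range(trials))
print(wins / trials)     # close to 2/3
```

Running the same loop with switch=False wins about 1/3 of the time, which is exactly the asymmetry the letter-writers refused to believe.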

Problems of Inducing Structure

To solve a Problem of Inducing Structure, one has to discover the relation between the elements of the problem. Analogies are a good example of this. Here's an example: Lemon is to sour as sugar is to ___? Too bad they are never this easy on tests like the SAT, but to solve analogies, whether it is the simple one given here or the toughest one on the SAT, the cognitive processes are pretty much the same. You have to note the relationship between the first two words (lemon and sour) and find a word that has the same relationship with the third word (sugar). In this case, that word is sweet. Another common inducing structure problem is series completion. Can you figure out what comes next in this series: 65 23 60 25 55 27 ____? Just as with an analogy problem, you have to figure out how the elements, in this case the numbers, are related. The next number in the series would be 50 because, starting with the first number, every other number decreases by 5. The number after 50 would be 29 because, starting with the second number, every other number increases by 2. As a final example of a problem of inducing structure, we will look at a test known as Raven's Progressive Matrices (Figure 8.17), which was designed by John Raven to measure reasoning ability. The test consists of a number of questions in which the test-taker is shown a series of patterns and is asked to choose which pattern completes the set. To figure out the answer, you have to understand how the elements are related. Only then can you select the correct answer.
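The series-completion logic can be expressed directly in code. A short Python sketch (my own, assuming the two interleaved sub-series each change by a constant step):

```python
series = [65, 23, 60, 25, 55, 27]

def next_term(series):
    # take the terms at the same (odd/even) positions as the next slot,
    # and assume that sub-series changes by a constant step
    sub = series[len(series) % 2::2]
    step = sub[1] - sub[0]
    return sub[-1] + step

print(next_term(series))         # 50 (the 65, 60, 55 sub-series falls by 5)
print(next_term(series + [50]))  # 29 (the 23, 25, 27 sub-series rises by 2)
```

The code mirrors the human solution: the structure to be induced is that the series is really two interleaved series, each with its own rule.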

Illusion of Control (by Proxy)

When playing dice games, people tend to throw the dice harder if they need high numbers and softer if they need low numbers (Figure 8.25). Many people want to pick their own numbers in the lottery because they believe their numbers are luckier than randomly generated numbers. Of course, the force with which the dice are thrown does not affect the numbers that are rolled, and just as in the lottery, each set of numbers has an equal probability of being chosen. The reason people think this way is the Illusion of Control, which leads one to believe they have control over events that they actually have no control over. That is, they believe their probability of success is higher than the objective probability of success. Sometimes people will try to gain control of a chance event by allowing someone they perceive as lucky to take control. This is known as Illusion of Control by Proxy. Allowing someone else to roll the dice because they are on a winning streak or allowing your lucky friend to pick your lottery numbers are both examples of illusion of control by proxy. Interestingly, psychologist Daniel Wegner takes the illusion of control to the extreme and states that our conscious will is an illusion. He argues that the mind's greatest trick is leading us to believe that we have conscious control over our actions. He believes that there are separate mechanisms that give rise to our consciousness and to our actions. To Wegner, consciousness is used to make sense of and understand our actions, but it doesn't necessarily cause them. So, believing we have conscious control over them, for Wegner, is just an illusion. Wegner's idea is certainly a controversial one, but it's also intriguing.

Language production and perception

When you start with an idea and then form a spoken or written sentence to express that idea, you are using Language Production (expressing language through speaking or writing). Your audience receiving the message is engaging in Language Perception. An important part of language perception is comprehension. To comprehend language, we rely on our understanding of what is coded in the language itself, such as the meaning of words (Semantics) and our understanding of grammar (Syntax). We also rely on information that is not directly coded in the language. This is referred to as Pragmatics. Examples of pragmatic information include the context of a conversation, the intent of the speaker, and other factors.

From Spoken Language to Written Language Orthography

You may have heard the old saying that you learn the three R's (reading, writing, and arithmetic) in school (Figure 8.11). If we could acquire written language with the same ease as spoken language, there would only be one R to master, and the curriculum within schools would be very different. The written form of language is referred to as Orthography. Most of the orthographic systems used today can be classified as one of three types based on how they relate the written units to the spoken units of the language. In a syllabary writing system, such as Japanese Kana, the written symbols map onto syllables. In a morphosyllabic system like Chinese, the written symbols map onto syllables that are usually morphemes. The final type of writing system is alphabetic, and it is what we will be concerned with here.

behaviorists

Behaviorists are interested in stimulus-response relations. What happens between the stimulus and the response could not be studied scientifically, according to behaviorists, and was referred to as a black box (Figure 8.1). Well, that was a problem for those interested in cognition, because cognition is what happens between stimulus and response. Cognition is inside the black box, and if it cannot be opened, then cognition cannot be a topic of study. It is worth mentioning that behaviorism was in many ways confined to research in the United States and never really caught on in other countries. During behaviorism's heyday, researchers in Europe were studying the topics of problem solving, memory, schemas, and cognitive development. Nevertheless, it would take some time before researchers in the United States joined in the fun.

semantics

The meaning of a word or group of words. For example, dog = barks, has fur, has claws. Can you think of any other features? The point we want to make is that the symbol dog does not have anything in common with the thing it denotes, such as barking, having fur, and having claws. Rather, we have to learn to map the symbol dog onto its underlying meaning.

thinking

Thinking is an almost hopelessly broad term and could include many, many things. We will limit our discussion of thinking to two areas that have received a lot of attention in cognitive psychology: problem solving and decision making.

language

to express language through speaking and writing

exchange errors

Exchange errors are those slips of the tongue we have all made at one time or another. They occur during speaking when two units of language get swapped within an utterance (Figure 8.7). Importantly, exchange errors occur within the same level, lending support to the argument that the different levels are psychologically real. There are many different types of exchange errors that give us a glimpse into the cognitive system. The three that I will discuss are phoneme exchanges, morpheme exchanges, and word exchanges. Phoneme exchanges occur when two phonemes switch places within an utterance. When the exchanges involve the initial consonant(s) of two words, they are sometimes called Spoonerisms after the Reverend William Archibald Spooner, who was reported to have made these during his sermons, although there is little direct evidence to support this. One example attributed to Spooner is when he said, "The weight of rages will press hard upon the employer." What he intended to say was the "rate of wages." Notice how the /r/ and /w/ sounds were exchanged. Another interesting property of phoneme exchanges is that consonants exchange with consonants and vowels exchange with vowels. This indicates that consonants and vowels are processed separately. In morpheme exchanges, two morphemes change places in the utterance: "He lifts the booked" when the intended utterance was "He lifted the books." Word exchanges are (you guessed it) when two words trade places: saying "I took the vet to the dogs" instead of "I took the dogs to the vet." The next time you have a slip of the tongue, remember that we all make them, and that they provide an important clue to our understanding of how language is processed.

behavioristic viewpoint

According to the behavioristic viewpoint, language was just another behavior that is learned through operant conditioning. This view of language was most clearly articulated in B.F. Skinner's book Verbal Behavior (1957). Skinner argued that children learn language through reinforcement in much the same way that a pigeon learns to emit a behavior in an operant conditioning chamber (see Figure 6.17). When initially learning a language, children learn to imitate the sounds they hear in their environment. Those sounds that are reinforced, for example by Mom's praise, tend to be repeated in the future. Those that do not receive reinforcement are extinguished. In a similar fashion, the behaviorist view maintains that children pick up the syntax of the language through imitation and reinforcement. As children get older, they begin to join words into groups. If these utterances are reinforced by the parent, then learning occurs, and the child is more likely to use them in the future. An expression like "Give me milk" would be reinforced by the parent providing milk to the child, and the expression would be more likely to be used the next time the child wanted milk. On the other hand, if the child said something like, "Milk me give," the parent would be less likely to give milk, meaning reinforcement would not occur and the response would be weakened. A crucial assumption of Skinner's theory is that language is learned through interaction with the environment, and that there is nothing innate about the human being that allows for our ability to learn language. Skinner's view is clearly on the nurture side of the nature versus nurture debate. On the nature side of the debate is the nativistic view of language. Nativism is the view that we are born with certain innate cognitive abilities. One of the leading proponents of nativism is Noam Chomsky, who wrote an influential critique of Skinner's Verbal Behavior.
The impact of Chomsky's review was such that many consider it a watershed moment in psychology that helped usher in the cognitive approach. According to Chomsky, imitation and reinforcement are insufficient to explain language development. As we saw earlier in the chapter, language is infinite, and because of this, Chomsky felt that any theory of language development needs to give the developing child a mechanism for dealing with the potentially infinite expressions they could produce. He considered it unrealistic that a child could learn an infinite number of expressions through imitation alone. Instead, Chomsky believes that we are born with an innate mechanism that allows us to rapidly acquire the rules of language when we are young. He referred to this as the Language Acquisition Device, and in later work, he further developed this idea into what became known as Universal Grammar. Chomsky believed that our brains are hardwired with a Universal Grammar that consists of a set of rules that allow us to learn any language. The developing child uses the Universal Grammar to learn the rules of the language to which they are exposed. As evidence for his view, Chomsky pointed out that children often make mistakes as if they had misapplied a rule. One common mistake that all English-speaking children make is to misapply the rule for forming the past tense. The child may learn that words like walked and talked mean the past tense of walk and talk. Having noted the regularity between these words, the child forms a rule along the lines of add -ed to a verb to make it past tense. Of course, this rule has exceptions, and its misuse leads to the child saying something like "I doed it yesterday" when they meant to say "I did it yesterday." These regularization errors are taken as evidence that the child has learned a rule, but has applied it incorrectly.

Additive Difference Model

Another model that has been suggested and that is closely related to the additive model is the Additive Difference Model. Using this model, you would again assign values to each of the dimensions. Then a difference score is calculated for each dimension by subtracting the value for the second alternative from the value for the first alternative. The difference scores are summed, and if the sum is negative, the second option is the better option. If the sum is positive, then you would choose the first option. Returning to our car example, we can see that the additive difference model leads to the same conclusion. Based on our original ratings, we would choose Car A. Just as with the additive model, it is important to include all important dimensions and consider their relative importance. Though the additive model and the additive difference model lead to the same conclusion, there is an important distinction between the two. In the additive model, an interdimensional strategy is used where all dimensions are considered within an alternative at once. The additive difference model uses an intradimensional strategy. That is, alternatives are compared directly on dimensions. This comparison leads to the difference scores and allows one to evaluate what dimensions are driving the decision. The higher the absolute value of a difference score, the larger the impact that dimension has on the final decision. In our car example, each of the dimensions is pretty equal in terms of the impact it has on the outcome. One final consideration in using the additive difference model that needs to be mentioned is how to handle the situation where there are more than two alternatives. In this case, you can compare the first two just as we did in the car example. The one that is the better choice is then compared to the third option. This can be repeated until all alternatives have been compared and a choice has been made.
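A sketch of the additive difference model in Python, using hypothetical ratings since the chapter's actual Figure 8.21 values are not reproduced here:

```python
# hypothetical 1-10 ratings; the chapter's actual Figure 8.21 numbers may differ
car_a = {"cost": 4, "mpg": 8, "color": 7}
car_b = {"cost": 6, "mpg": 5, "color": 5}

# intradimensional strategy: compare the alternatives dimension by dimension
diffs = {dim: car_a[dim] - car_b[dim] for dim in car_a}
total = sum(diffs.values())

print(diffs)                              # the per-dimension difference scores
print("Car A" if total > 0 else "Car B")  # a positive sum favors the first alternative
```

Notice how the difference scores expose what drives the decision: here Car A loses on cost but more than makes up for it on MPG and color, which is exactly the compensatory character of the model.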

Moving Beyond Probabilities Bayes' Theorem

Another model that is often used in understanding how probabilities affect judgment and decision making is Bayes' Theorem. Before we look at Bayes' Theorem, it is useful to consider an example given by Casscells and colleagues: "If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person's symptoms or signs?" This question was posed to faculty, staff, and students at Harvard Medical School. The most common estimate was 95%. Only 18% of those polled gave the correct answer, which is 2%. This answer may seem surprising, but it is because the base rate for the disease is so low. It only occurs in 1/1000 people. Bayes' Theorem allows us to calculate conditional probabilities, such as the probability of a disease given a positive test, and in calculating the conditional probability the base rate, also called the prior probability, is taken into account. A conditional probability is represented as P(A|B) and is read as "the probability of A given B." A conditional probability is simply the probability that some event (Event A) will happen provided, or given, that something else has happened (Event B). The events can be just about anything. For the disease problem above, we can write Bayes' theorem as follows: P(disease | positive) = [P(positive | disease) × P(disease)] / [P(positive | disease) × P(disease) + P(positive | healthy) × P(healthy)]. Plugging in the numbers gives (1.0 × 0.001) / (1.0 × 0.001 + 0.05 × 0.999) ≈ 0.02, or 2%. As Bayes' theorem makes clear, the low base rate of the disease leads to a low conditional probability of having the disease given a positive test. What we really care about is not Bayes' Theorem per se, but why people are so bad at estimating probabilities like this. One reason seems to be that we neglect base rate information. When presented with a problem like the one above, people do not consider how likely the event is in the population. All things being equal, the lower the base rate is for some event, the lower the conditional probability should be for that event.
To see that this is true try calculating the conditional probability above assuming the base rate of the disease is 1/100,000. Did you get the answer, .0002 or .02%?
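The calculation above can be checked with a few lines of code. This is a minimal sketch, not from the text: the function name `posterior` is mine, and it assumes the test always detects the disease when present (sensitivity = 1), which is the standard reading of the Casscells problem.

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' theorem for P(disease | positive test):
    P(D|+) = P(+|D)P(D) / [P(+|D)P(D) + P(+|not D)P(not D)]"""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Casscells et al. problem: prevalence 1/1000, 5% false positive rate.
print(round(posterior(1/1000, 1.0, 0.05), 3))     # 0.02, i.e., about 2%
# With a base rate of 1/100,000 the answer drops to .02%.
print(round(posterior(1/100000, 1.0, 0.05), 5))   # 0.0002
```

Notice how the posterior falls with the base rate while the test itself stays exactly the same, which is the base rate neglect point made above.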

Noncompensatory model elimination by aspects model

Another noncompensatory model is the elimination by aspects model. To make a decision with this model you start by selecting the most important dimension (or aspect) and evaluate all alternatives on this dimension. Any alternative that is not above some minimum standard on this dimension is eliminated. Next, you do the same thing for the second most important dimension. You continue in this manner until there is only one alternative left. When using elimination by aspects, it is critical that the dimensions be considered in order of importance. Different orderings of the dimensions can result in drastically different choices.
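The elimination by aspects procedure described above can be sketched in a few lines of Python. This is a hypothetical illustration: the car names, dimensions, scores, and minimum standards are made up, not from the text.

```python
def eliminate_by_aspects(alternatives, dimensions):
    """alternatives: {name: {dimension: score}};
    dimensions: (dimension, minimum) pairs ordered by importance.
    Each pass keeps only alternatives meeting the minimum on that dimension."""
    remaining = dict(alternatives)
    for dim, minimum in dimensions:
        if len(remaining) == 1:
            break  # only one alternative left: decision made
        remaining = {name: scores for name, scores in remaining.items()
                     if scores[dim] >= minimum}
    return list(remaining)

cars = {
    "Civic":   {"cost": 8, "mpg": 9, "color": 5},
    "Camry":   {"cost": 7, "mpg": 7, "color": 8},
    "Mustang": {"cost": 4, "mpg": 5, "color": 9},
}
# Cost first, then MPG: the Mustang is cut on cost, then the Camry on MPG.
print(eliminate_by_aspects(cars, [("cost", 6), ("mpg", 8)]))    # ['Civic']
# Rank the dimensions differently and the choice changes.
print(eliminate_by_aspects(cars, [("color", 8), ("cost", 6)]))  # ['Camry']
```

The two calls show the point made above: the same alternatives, evaluated with a different ordering of the dimensions, produce drastically different choices.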

Hill Climbing Heuristic

Another well-known heuristic is the Hill Climbing Heuristic (Figure 8.20). To use this heuristic, you simply change the current state of the problem to one that appears to be closer to the goal state. This will lead to another state, and from this state, you would choose the state that gets you closer to the goal state. You continue to apply this heuristic until you reach the goal. This heuristic derives from the idea that if you are climbing a hill and cannot see the way up, you choose the path that is the steepest. The hill climbing heuristic can successfully lead to the solution in many cases. However, it can be a hindrance to problem solving when one needs to take a path through problem space that may not seem the most direct given the available choices. In terms of literal hill climbing, this would equate to taking a path that goes back down the hill and then eventually heads back up. This could be the best choice, though, if another path goes straight up the hill, but is blocked by a fallen tree.
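A literal version of hill climbing can be sketched on a one-dimensional terrain. This is a toy illustration; the terrain values and function name are invented. Note how the starting point determines whether the climber reaches the true summit or gets stuck on a smaller hill, mirroring the blocked-path problem described above.

```python
def hill_climb(heights, start):
    """Greedy hill climbing on a 1-D terrain: always step to the
    neighboring position that is higher; stop when neither one is."""
    pos = start
    while True:
        neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < len(heights)]
        best = max(neighbors, key=lambda p: heights[p])
        if heights[best] <= heights[pos]:
            return pos  # no neighbor is higher: a (possibly local) peak
        pos = best

terrain = [1, 3, 5, 4, 2, 6, 9, 7]
print(hill_climb(terrain, 1))  # 2: stuck on the small hill (height 5)
print(hill_climb(terrain, 4))  # 6: reaches the true summit (height 9)
```

Starting at position 1, the heuristic refuses to step "down" through the valley at position 4, so it never finds the higher peak, just as a hiker following the steepest path can be stopped by a fallen tree.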

cognitive psychology William James, Hermann Ebbinghaus, and Franciscus Donders

As early as the 1800s, William James had laid some of the initial groundwork for a highly influential class of cognitive models referred to as connectionist models; some of the earliest empirical work on memory and forgetting was done by Hermann Ebbinghaus; and the first methods of how to use reaction time to study cognition were laid out by Franciscus Donders.

Relationship Between Language and Thought

Benjamin Whorf was a fire insurance inspector, but his real interest was in linguistics and studying Native American languages. To satisfy his interest, he started taking courses at Yale while continuing to work in the insurance industry. As he advanced in his studies and continued his research on Native American languages, Whorf formed a theory of how language and thought interact. An important part of his theory was the idea of Linguistic Relativity. According to Whorf, a person's language influences the way they think about the world. The theory of linguistic relativity, or as it is sometimes called, the Whorfian hypothesis, has proven controversial since its inception. Whorf based his theory on findings that some Native American languages had multiple words for the same concept. One example he gave was for the concept of snow. He argued that in English, we use the same words for the concepts of falling snow, slushy snow, wind-driven snow, and other versions of snow. However, Whorf claimed that Eskimo languages have many words for snow to reflect all of these different variations. Because of the difference between the English and Eskimo languages in how they represent the concept of snow, Whorf believed that speakers of the languages should think and perceive snow in very different ways. It should be noted that later critics have argued that English is the same, as we have words like slush and blizzard. They have also noted that the Eskimo do not actually have that many words for snow. Regardless, linguistic relativity caught the attention of researchers and became an active topic of study within the field. Over the years, researchers have argued for two different versions of the theory. The strong version of the theory states that language determines thought. This version is sometimes referred to as linguistic determinism. To test the strong version, many researchers turned to color perception.
It may seem odd, but languages vary in the number of words they use to represent color. The most striking example of this can be found in the Dani Tribe of New Guinea. The language used by the Dani includes only two words for color. The word mili is used for cool/dark colors such as blue and green, whereas mola is used for warm/light colors such as yellow and red. The strong version of linguistic relativity would predict that a person from the Dani Tribe should perceive color differently than someone that speaks English and has different labels for blue, green, red, and yellow. But that does not seem to be the case. Eleanor Rosch found that the Dani could discriminate between blue and green and between red and yellow just fine. It seems that having only two color words does not determine the way the Dani perceive color. Research like Rosch's has led many to reject the strong version of linguistic relativity (Figure 8.27). Because research has indicated that the strong version of linguistic relativity is not supported, a weak version of linguistic relativity has been put forth. The weak version says that language affects or influences thought. This seems more reasonable, and there are some data to support it. Later research on color perception has shown that speakers of a language that maps multiple colors onto one word have more difficulty discriminating between those colors than do speakers of a language that has a one color to one word mapping. Other research has moved beyond color perception and has shown that language can affect such diverse topics as perception of time, counterfactual reasoning, and thoughts about motion. On balance, the safest conclusion seems to be that the language we speak affects how we perceive and think about the world, but it certainly doesn't determine it. Given that the language we speak can influence our thought, the next step is to question whether variations within our language influence thought.
Does the wording choice you use to express an idea influence how people will think about it? The short answer is yes. We have already seen one example of this above with the framing effect. As we saw, you can get opposite patterns of endorsement based on how a problem is worded. Another example of how wording choice influences thought comes from the work of Elizabeth Loftus and colleagues. They had subjects watch a film of a car crash. After viewing the cars crash, some subjects were asked how fast the cars were going when they "smashed" into each other. Other subjects were asked how fast the cars were going when they "hit" each other. Interestingly, the group that was given the "smashed" question estimated a faster speed than did those given the "hit" question. On a test a week after viewing the crash, subjects were asked if they had seen any broken glass when the cars crashed. There was no broken glass. Nevertheless, those in the "smashed" condition were more likely to incorrectly say that they had seen broken glass than were those in the "hit" condition. In a similar study, subjects saw a car crash and then were given the question "Did you see a broken headlight?" or "Did you see the broken headlight?" Those given the version with the definite article the were more likely to incorrectly say that they saw a broken headlight. As a final example, consider how word choice can bias our thoughts. Euphemisms are a great example of this. Instead of saying that an employee has been "fired," they have been "let go" or there has been a "reduction in workforce" (Figure 8.28). When talking to their clients, investment advisors will refer to a stock that has lost the client money as an "underperforming stock." The point of a euphemism is to present something negative in a more positive way. Interestingly, when used enough a euphemism can take on the negative connotation of the word it was designed to replace. Steven Pinker calls this the euphemism treadmill.
He gives the example of garbage collection turning into sanitation that morphed into environmental services. Pinker argues that the euphemism treadmill is an indication that the concepts affect language rather than the other way around. Making a new, friendlier word or phrase for a concept does not change the way you think of the concept. Instead, over time, the word or phrase takes on the negative connotation of the concept. According to Pinker, we will know when we have achieved equality and mutual respect for one another when the names for minorities no longer change. There is no denying that language and thought are intertwined. The research on linguistic relativity and on how wording choices affect thought makes this clear. The interesting question that needs to be addressed going forward is the degree to which language influences thought. As it stands now, we tend to make rather unsatisfying claims such as "language affects thought." Only time will tell how strong that effect is.

cognition

Cognition can be thought of as the mental processes used in receiving, manipulating, and communicating information. There are many areas that are of interest to cognitive psychologists. Some of these, such as how we recognize, store, and retrieve information have been covered in previous chapters.

Decision making

Decision making is the process of evaluating a number of alternatives and making a choice between them. Sometimes our decisions involve making judgments about how likely we think some event is. For example, you make a judgment that your psychology professor is going to give a pop quiz tomorrow, so you make the decision to reread the material that was assigned. Every day we are faced with judgments and decisions that we must make. You decide what kind of car to buy, house to buy, what TV show to watch at night, etc. You even made a decision to read this book. That was a good choice by the way. Some of the choices we make, like deciding what to have for dinner, have little impact on our life, but some choices, like deciding what to major in, change our life forever. Ideally, we would always make the best decisions, but this is not the case. One of the themes you will see in this section is that decision making is often affected by much more than what is rational. The French Nobel Prize-winning author Albert Camus once remarked that "Life is the sum of all your choices." Tragically, Camus died young at the age of 46 in a car crash. In his pocket was an unused train ticket. He had intended to take the train that day, but at the last minute made the decision to ride in the car with a friend. Of course, Camus could not have known that his life hung on the choice between taking the train and riding in the car, but it illustrates another important aspect of decision making. Many times, we have to make decisions under uncertainty. We often do not know what the best choice is when making a decision, because we do not know what will happen based on the different choices we could make. We will start off this section by looking at some of the basic strategies that are used in decision making. Then we will look at judgment and decision making involving probabilities. We will conclude the section by covering some biases that affect judgment and decision making.
Although I cannot promise you that reading this will help you always make the best decisions, you will at least gain an appreciation for the factors that go into judgment and decision making and understand how to more carefully evaluate the alternatives next time you have an important decision to make.

language- what is it

Did you know that when you use language, for example having a conversation with a friend, you are causing a chemical reaction in the brain of the person you are conversing with? Language can be thought of as the ability to use symbols to convey meaning. Any language can be reduced down to a finite set of symbols. The symbols that are used in language have an arbitrary relationship with the thing they represent.

FIGURE 8.2 The symbols used in language have an arbitrary relationship to what they represent. There is nothing about the letters d o g that help you understand the meaning of dog. A large part of language development is learning an association between the words and the thing they represent. Image © Stuart Monk. Used under license from Shutterstock, Inc.

In order to study the mind scientifically, cognitive psychologists make three fundamental assumptions.

First, cognitive psychologists argue that we represent the world with codes or symbols. Second, we have rules for manipulating these symbols. Third, by giving the right stimuli to participants and noting their responses, we can infer the symbols and rules used by the cognitive system. As an example of this, as quickly as possible say out loud the following words: mint, lint, squint, flint, hint, stint, tint, pint. Did you have trouble? Many people reading this list of words will slow down or even mispronounce the word pint. Pint is what is referred to as an irregular word, meaning that its pronunciation does not follow the rules of English. If you applied the rules of English, you would get a pronunciation that rhymed with mint. Having people read a list of words that follow the rule (e.g., mint, lint, flint, etc.) causes them to misapply the rule and mispronounce pint. So, by noting the relationship between the words (stimuli) and pronunciations (responses), cognitive psychologists are able to better understand how your cognitive system is able to read words. The assumption that we can infer the symbols and rules used by the cognitive system by noting the relationship between stimuli and response is at the heart of cognitive research.
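The "rule plus exceptions" idea can be illustrated with a toy pronouncer. This is not a real model of reading: the IPA-style strings and the exception table are purely illustrative, and the point is only that a symbol system can combine a general rule with a lookup for irregular items like pint.

```python
# A general rule for "-int" words (rhymes with mint), plus an
# exception table for irregular words that ignore the rule.
RULE_RHYME = "ɪnt"             # regular pronunciation, as in "mint"
EXCEPTIONS = {"pint": "aɪnt"}  # irregular: does not follow the rule

def pronounce_rhyme(word):
    """Look up the irregular form first; otherwise apply the rule."""
    return EXCEPTIONS.get(word, RULE_RHYME)

for w in ["mint", "lint", "squint", "pint"]:
    print(w, "->", pronounce_rhyme(w))
# Applying the rule blindly to "pint" would wrongly make it rhyme with "mint",
# which is exactly the error readers make after priming with regular words.
```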

Noam Chomsky- surface structure and deep structure

Further refining his theory, Chomsky also introduced the concepts of Surface Structure and Deep Structure. The deep structure referred to the underlying meaning of the sentence, and the surface structure was the actual utterance. Chomsky believed that every sentence had both a deep structure and a surface structure, and that transformational rules transformed deep structure into surface structure. So, "The beer was drunk by the woman" and "The woman drank the beer" would be two surface structures that are derived from a common deep structure. Another way of thinking about it is that deep structure specifies the semantic aspect of the sentence and the surface structure specifies the phonological component. One major advantage of including the distinction between surface and deep structure was that he could explain ambiguous sentences that could not be explained by phrase structure rules. Consider the classic sentence "The shooting of the hunters was terrible." Who is doing the shooting? If you stop and think about it, there are two possible meanings or deep structures to this sentence. One deep structure would indicate that the marksmanship of the hunters is terrible. The other deep structure would be that someone shot the hunters and that was terrible. This example shows that it is possible to have one surface structure that can map onto two different deep structures. Importantly, the two possible meanings have the same tree diagram. This means the rules of the syntax are not enough to resolve the ambiguity. Ambiguities like that we have seen in this section occur often in language, but we resolve these ambiguities so effortlessly, we are generally not aware there was an ambiguity in the first place. How then do we resolve these ambiguities? One important way is through the use of pragmatics or context. Often we do not even notice the ambiguity because the context of the sentence makes it clear what the intended meaning is. 
Let's look at an example given by Chomsky, "Visiting relatives can be a nuisance." Do you see the two meanings of the sentence? It could mean that visiting relatives is a nuisance or that visiting relatives are a nuisance. If my wife and I were driving to my uncle's house, and I said to my wife, "Visiting relatives can be a nuisance" she would interpret the sentence as having the visiting relatives is a nuisance deep structure. Most likely she would not even be aware that there was an ambiguity in the sentence as the context is able to quickly help her select the appropriate meaning. This example also highlights the fact that comprehending language requires more than just an understanding of the language itself. It also includes our understanding of the environment in which the language is used. If I said the same thing to my wife when we were sitting at our house waiting for my uncle to come over, her interpretation would have been very different.

Framing Effects

How something is presented can have a significant impact on how people respond to it. Tversky and Kahneman showed this clearly by asking subjects to consider the following scenario: Imagine that the US is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:

Program A: If Program A is adopted, 200 people will be saved.
Program B: If Program B is adopted, there is 1/3 probability that 600 people will be saved, and 2/3 probability that no people will be saved.

When presented with this version of the problem and asked which of the two programs they favored, the majority of subjects chose the sure thing, Program A. When a different group of subjects were given the same scenario but with the options below, the outcome was different.

Program C: If Program C is adopted, 400 people will die.
Program D: If Program D is adopted, there is 1/3 probability that nobody will die, and 2/3 probability that 600 people will die.

In this case, the majority chose Program D. Now compare Program A and Program C. Notice that they are the same thing. Program A presents the outcome in terms of gains (lives saved), whereas Program C couches the outcome in terms of losses (lives lost). Despite this, they both represent the same outcome. Notice that Program B and Program D are also describing the same outcome, just in different ways. Coming to different decisions based on how the outcomes are presented, or framed, is known as the Framing Effect. As other examples, researchers have shown that people give higher ratings to ground beef described as 75% lean as opposed to 25% fat, and they are more favorable towards measures describing a 95% employment rate versus a 5% unemployment rate.
In some instances, the framing effect can be understood in terms of whether the outcomes are framed as gains or losses. This is the case in the disease problem given above. As Tversky and Kahneman have shown, people prefer to avoid losses more than they prefer to make gains. Because decision makers hate to incur a loss, they exhibit risk aversion when presented with gains. They take the sure gain rather than taking a chance on a loss. In the first version of the disease problem, the two outcomes are framed in terms of gains, and when framed in terms of gains, people chose the sure gain of Program A because they were risk averse. In the second version of the problem, the two outcomes are framed as losses. Here, Program C is rejected because of loss aversion and the majority become risk seeking and choose Program D. This indicates that people are risk averse when the question is framed as a gain, but are risk seeking when it is framed as a loss.
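A quick arithmetic check (hypothetical code, not from the text) confirms that all four programs describe the same expected outcome of 200 lives saved out of 600, which is why the reversal of preferences is attributed to the framing rather than to the outcomes themselves.

```python
# Expected number of lives saved under each program in the disease scenario.
program_a = 200                                 # 200 of 600 saved for sure
program_b = (1/3) * 600 + (2/3) * 0             # gamble, gain framing
program_c = 600 - 400                           # 400 of 600 die for sure
program_d = (1/3) * 600 + (2/3) * (600 - 600)   # gamble, loss framing

# All four describe 200 lives saved on average.
print(program_a, round(program_b), program_c, round(program_d))  # 200 200 200 200
```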

Graphemes

In an alphabetic system, the written units, called Graphemes, map onto phonemes. Let's look at an example in English. The word cat consists of three phonemes. The three graphemes c, a, t map onto the three sounds /k/, /a/, /t/. So, for the word cat each letter is a grapheme. In some words, the number of phonemes is less than the number of letters, like in the word bait. For bait, the graphemes b, ai, t map onto the phonemes /b/, /ā/, /t/. The grapheme ai is what we call a Complex Grapheme because it consists of more than one letter.
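The idea that complex graphemes like ai must be matched before single letters can be sketched as a toy longest-match parser. This is a hypothetical illustration: the grapheme-to-phoneme table covers only the example words, and real English spelling is far messier.

```python
# Toy grapheme-to-phoneme table; "ai" is a complex (two-letter) grapheme.
GRAPHEMES = {"ai": "ā", "c": "k", "a": "a", "t": "t", "b": "b"}

def to_phonemes(word):
    """Scan left to right, preferring the longest grapheme that matches."""
    phonemes, i = [], 0
    while i < len(word):
        for size in (2, 1):  # try two-letter graphemes before single letters
            chunk = word[i:i + size]
            if chunk in GRAPHEMES:
                phonemes.append(GRAPHEMES[chunk])
                i += size
                break
        else:
            i += 1  # no mapping for this letter: skip it
    return phonemes

print(to_phonemes("cat"))   # ['k', 'a', 't']
print(to_phonemes("bait"))  # ['b', 'ā', 't'] -- "ai" maps to one phoneme
```

The second call shows why bait has four letters but only three phonemes: the two letters ai are consumed as a single complex grapheme.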

Scientific Attitude toward cognition- the shift

In the United States, the scientific attitude toward cognition started to shift during the 1950s. During this time, many psychologists became dissatisfied with behaviorism and started to argue that if we want to understand humans, we need to open the black box. We need to understand what the mind is and emphasize that any account of human functioning that does not include the mind is incomplete. Fortunately, there were many researchers interested in studying the mind, and they started getting together to discuss their research at conferences. It was at an MIT conference in September of 1956 that many place the birth of cognitive psychology. At this conference, the black box was officially opened, never to be closed again. During this conference, there were a number of seminal talks that would change the world of psychology forever. Many point to three talks as being especially important. First, Herbert Simon and Allen Newell presented research on some of the first artificial intelligence work ever done. Second, Noam Chomsky presented a model of language that directly challenged behaviorism. Third, George Miller presented his groundbreaking research on short-term memory and the magical number seven plus or minus two. The theme that arose from the conference was that it was possible to scientifically study the mind.

What we should do vs. what we do- expected value

It probably won't come as a surprise that much of the decision making research has its roots in economics. After all, economists, particularly behavioral economists, are very interested in what makes consumers decide to buy a certain product. One model that economists use to understand decision making is Expected Value. To calculate the expected value of an event you multiply each of the possible outcome values by their probabilities and sum them up. An example will help make this clearer. If I told you that I have a dice game where you have to pay $5 to play, but if you roll any number of your choosing on one roll of a six-sided die I will give you $25, would you play? Expected value gives an easy way to determine if you will gain money, lose money, or break even over the long haul. For my dice game there are only two outcome values we need to consider, the value of a win, V(W), and the value of a loss, V(L). The probabilities are the probability of a win, P(W), and the probability of a loss, P(L). The formula for expected value is given below:

EV = P(W) × V(W) + P(L) × V(L)

Expected value is a type of Normative Model that prescribes what one should do in a given situation. Obviously, though, people do not always do what a normative model like expected value suggests. Go into any casino and you will see a room full of people violating expected value. Other examples where people behave differently than predicted by expected value are life insurance and extended warranties on television sets. Many people buy these despite the fact that they both have negative expected values. I'm certainly hoping that my family loses money on my life insurance policy! To explain why people play games of chance and buy insurance, Descriptive Models are useful because they are designed to describe what people actually do, rather than what they should do.
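The dice game can be checked directly (a minimal sketch; the function name is mine). The net win is $20 (the $25 payout minus the $5 entry fee) with probability 1/6, and the net loss is the $5 fee with probability 5/6.

```python
def expected_value(outcomes):
    """outcomes: (probability, value) pairs.
    EV = sum of probability * value over all possible outcomes."""
    return sum(p * v for p, v in outcomes)

# Dice game: pay $5 to play; win $25 if your chosen number comes up.
ev = expected_value([(1/6, 20), (5/6, -5)])
print(round(ev, 2))  # -0.83: you lose about 83 cents per game in the long run
```

Since the expected value is negative, the normative model says you should decline the game, even though many people would happily play it.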

Parallel Distributed Processing (PDP)

Let's consider again what a child learns when trying to understand how to form the past tense of a verb in the English language. As we have seen, one way may be that, after sufficient exposure to the language, the child forms a rule that says add -ed to a verb to make it past tense. As intuitive as this explanation is, there are researchers that convincingly argue that children never learn a formal rule. Rather, they point to the statistical structure of language and claim that children use this structure to learn the language. By their account, the critical period of language development is critical because during this time children are especially able to exploit the statistical structure of language. To support the statistical structure view, researchers have shown that computer models called Parallel Distributed Processing (PDP) are able to learn the past tense of English words (Figure 8.9). PDP models consist of simple processing units or nodes that are connected to each other and that pass activation to one another. In this sense, the units in a PDP model can be likened to the neurons in the brain. Also like neurons in the brain, the units in a PDP model can send excitatory or inhibitory activation to other units. Learning in a PDP model occurs as the pattern of activation between units is adjusted due to experience. Suppose the model has learned that walked, talked, and placed are all past tense. When presented with a new input like go the model would most likely produce goed as the output. Of course, this is wrong, and the model would be instructed that the proper word was went. It may take a few presentations for the model to learn, but over time the connections between units would be adjusted such that went is produced as the past tense of go. The important point is that the model never learns a rule. 
It learns to produce the regular instances like talked and the irregular instances like went by simply adjusting the activation flow between its processing units. Importantly, these adjustments are made based on the statistical structure that is inherent in the language. Modeling endeavors along these lines have provided real challenges to the strictly rule based account of language learning. Other evidence that children learn based on the statistical structure of the language comes from studies investigating how infants learn word boundaries. The problem comes from the fact that in spoken language there is not a clear marker for where one word ends and another begins (Figure 8.10). How then do children learn to break the speech stream into words? Work by Jenny Saffran and colleagues indicates that infants can use the statistical structure of the language to help segment the speech stream. For example, in the utterance pretty baby there are four syllables, pre, ty, ba, and by. The syllable pre is followed by only a few syllables in the English language. Examples include ty as in pretty and side as in preside. Because pre is followed by a limited number of syllables, the probability that pre will be followed by ty is high. We call the probability that one syllable will follow another the Transitional Probability. On the other hand, the probability that ty will be followed by ba is extremely low. This is because ty is the last syllable in the word and could be followed by any syllable. As it turns out, within a language the transitional probability is much lower between the syllables of adjacent words (tyba) than it is between syllables within a word (pretty). The fact that the transitional probability for tyba is lower than for pretty gives the infant a clue that the boundary between words occurs between the ty and ba syllables and that pretty is a word but that tyba is not.
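Transitional probabilities can be estimated from a syllable stream by simple counting. This is a toy illustration with a made-up miniature corpus (real infant-directed speech is far richer): within-word pairs like pre-ty come out high, while cross-boundary pairs like ty-ba come out lower.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """P(next syllable | current syllable) for each adjacent pair."""
    pairs = list(zip(syllables, syllables[1:]))
    pair_counts = Counter(pairs)
    first_counts = Counter(first for first, _ in pairs)
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# A tiny made-up stream: "pretty baby pretty doggy pretty baby".
stream = ["pre", "ty", "ba", "by", "pre", "ty", "dog", "gy",
          "pre", "ty", "ba", "by"]
tp = transitional_probabilities(stream)
print(tp[("pre", "ty")])            # within-word pair: 1.0
print(round(tp[("ty", "ba")], 2))   # across the word boundary: 0.67
```

The dip in transitional probability at ty-ba is the statistical cue to a word boundary that Saffran's infants appear to exploit.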
Later research by Gary Marcus and colleagues showed that infants are still able to learn the boundaries between words when the transitional probability does not offer a way of segmenting the speech stream. Instead, they claim that infants segment the speech stream by learning rules that help figure out where one word ends and the next begins. On balance, the question is probably not whether the developing child uses only rules or only statistical structure to learn language. There are convincing data on both sides of the issue, and the safest conclusion seems to be that the budding language learner has more than one tool in their toolbox.

Linguistics and Psycholinguistics

Linguistics is the study of language itself. The goal is to understand the language. Psycholinguistics is the study of the psychological processes used during language processing. The reason I'm highlighting this distinction is because linguistic evidence that there is a certain rule structure for a language does not mean that those rules are used when we process that language. Clearly, people like Chomsky believe that we learn the rules of the language, and without question, the view that "language learning is rule learning" has been a driving force in understanding language development. Nevertheless, there are those that question its usefulness.

