Chapter 7 [S&P]
The Ecological Approach to Perception
Focuses on studying moving observers and on determining how their movement creates perceptual information that both guides further movement and helps observers perceive the environment. -->During World War II, J. J. Gibson studied the kind of perceptual information that airplane pilots use when coming in for a landing. -->From the 1950s to the 1980s, perception research was carried out by having stationary observers look at stimuli. Gibson argued that this way of studying perception couldn't explain perception as experienced by moving observers, such as pilots or people walking. The correct approach, Gibson suggested, was to study how people perceive as they move through the environment.
Optic Ataxia
Difficulty pointing to or reaching toward visual stimuli; associated with damage to the parietal lobe.
Wayfinding
We often travel to destinations we can't see from the starting point, such as when we walk across campus from one class to another or drive to a destination several miles away. This kind of navigation, in which we take a route that involves making turns, is called wayfinding.
Optic Flow
-->Optic flow provides information about how rapidly we are moving and where we are headed, for example, the way the road and roadside flow past a rider on a motorcycle. Optic flow has two characteristics: 1. *Gradient of Flow:* Optic flow is more rapid near the moving observer and slower farther away; this difference in flow speed is called the gradient of flow. According to Gibson, the gradient of flow provides information about how fast the observer is moving. 2. *Focus of Expansion (FOE):* There is no flow at the destination toward which the observer is moving. In the image of optic flow lines for an airplane coming in for a landing, the FOE is indicated by a red arrow.
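To make the geometry concrete, here is a minimal Python sketch (an illustration only, not material from the textbook) of the flow field seen by an observer translating straight ahead: image motion is computed from a simple pinhole-projection model, so flow speed falls off with depth (the gradient of flow) and shrinks to zero at the point the observer is heading toward (the FOE). The speed, focal length, and scene points are arbitrary assumed values.

```python
import numpy as np

# Minimal sketch (illustration only): the flow field for an observer translating
# straight ahead at speed V, using a pinhole camera model. A scene point at
# camera coordinates (X, Y, Z) projects to the image point (x, y) = (f*X/Z, f*Y/Z);
# forward motion shrinks Z at rate V, so the image velocity is (u, v) = (x*V/Z, y*V/Z).
f, V = 1.0, 10.0                                # focal length and forward speed (arbitrary units)

rng = np.random.default_rng(0)
X = rng.uniform(-20, 20, 200)                   # scene points scattered to the sides
Y = rng.uniform(-2, 2, 200)
Z = rng.uniform(5, 100, 200)                    # depth: small Z = near the observer

x, y = f * X / Z, f * Y / Z                     # image positions of the dots
u, v = x * V / Z, y * V / Z                     # image velocities (the optic flow)
speed = np.hypot(u, v)

# 1. Gradient of flow: points near the observer (small Z) flow fastest.
print(f"mean flow speed, near points (Z < 20): {speed[Z < 20].mean():.2f}")
print(f"mean flow speed, far points  (Z > 60): {speed[Z > 60].mean():.2f}")

# 2. Focus of expansion: flow vanishes at the point the observer is heading
#    toward (the image center for straight-ahead motion).
closest = np.argmin(np.hypot(x, y))
print(f"flow speed of the dot nearest the FOE: {speed[closest]:.3f}")
```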
Think About It
1. It is a common observation that people tend to slow down as they are driving through long tunnels. Explain the possible role of optic flow in this situation. (p. 154)
2. We have seen that gymnasts appear to take visual information into account as they are in the act of executing a somersault. In the sport of synchronized diving, two people execute a dive simultaneously from two side-by-side diving boards. They are judged based on how well they execute the dive and how well the two divers are synchronized with each other. What environmental stimuli do you think synchronized divers need to take into account in order to be successful? (p. 155)
3. Can you identify specific environmental information that you use to help you carry out actions in the environment? This question is often particularly relevant to athletes.
4. If mirror neurons do signal intentions, what does that say about the role of top-down and bottom-up processing in determining the response of mirror neurons? (p. 166)
5. How do you think the response of your mirror neurons might be affected by how well you know a person whose actions you were observing? (p. 166)
6. How does your experience in interacting with the environment (climbing hills, playing sports) correspond or not correspond to the findings of the "potential for action" experiments described in the Something to Consider section? (p. 169)
Test Yourself II
1. What is an affordance? Describe the results of the experiments on patient M.P. that illustrate the operation of affordances.
2. Describe the early experiments that showed that there are neurons in the parietal cortex that respond to goal-directed reaching.
3. How does the idea of what (ventral) and how (dorsal) streams help us describe an action such as reaching for a coffee cup?
4. Describe Fattori et al.'s experiments on "grasping neurons" and "visuomotor grip cells."
5. What is the parietal reach region?
6. Describe the experiment on optic ataxia patients that shows that the dorsal stream is involved in helping to avoid obstacles.
7. What are mirror neurons? What is the evidence that mirror neurons aren't just responding to a specific pattern of motion?
8. Describe Iacoboni's experiment that suggested that there are mirror neurons that respond to intentions.
9. What is a possible mechanism that might be involved in mirror neurons that respond to intentions?
10. What are some of the proposed functions of mirror neurons? What is the scientific status of these functions?
11. Describe the action-based account of perception. In your discussion, indicate (a) why some researchers think the brain evolved to enable us to take action; (b) how experiments have demonstrated a link between perception and "ability to act."
Test Yourself I
1. What two factors does the ecological approach to perception emphasize?
2. What is optic flow? What are two characteristics of optic flow?
3. What is invariant information? How is invariance related to optic flow?
4. What is observer-produced information? Describe its role in somersaulting and why there is a difference between novices and experts when they close their eyes.
5. Describe the swinging room experiments. What principles do they illustrate?
6. What is the evidence (a) that optic flow provides information for the direction someone is heading and (b) that there are neurons that respond to optic flow?
7. What does research on driving a car and walking tell us about how optic flow may (or may not) be used in navigation? What are some other sources of information for navigation?
8. What is wayfinding? Describe the research of Hamid et al. (computer maze) and Schinazi and Epstein (walking on the Penn campus) that investigated the role of landmarks in wayfinding.
9. What do the brain scanning experiments of Schinazi and Epstein (measuring responses to buildings on the Penn campus) and Janzen and van Turennout (measuring activation when navigating a virtual museum) indicate about brain activity and landmarks?
10. Describe the case studies of patients with damage to their retrosplenial cortex and hippocampus. What conclusions about the function of these structures were reached from these observations?
11. What does it mean to say that wayfinding is "multifaceted"?
Directional Ability
Ability to determine the direction of any familiar destination with respect to current position, and ability to use directional information provided by familiar landmarks.
The Effect of Brain Damage on Wayfinding
Ability to navigate through the environment is affected by damage to various brain structures. Focus on two structures that have been shown to be involved in navigation: the retrosplenial cortex and the hippocampus.
*Retrosplenial Cortex* -->A 55-year-old taxi driver was unable to find his way home. He could recognize buildings and knew where he was. -->Hospital tests revealed damage to the retrosplenial cortex. He could identify buildings and common objects and was able to remember the positions of objects in a room, but he couldn't describe or draw routes between his house and familiar places or draw the layout of his house. He had lost his *directional ability.* -->Another case: a 70-year-old retired schoolteacher was unable to determine the viewpoints from which photographs of familiar places were taken.
*Hippocampus Damage* -->Patient T.T. had been a London taxi driver for 37 years before he contracted a severe case of encephalitis that damaged his hippocampus. He was then unable to find his way around his own neighborhood. He was tested on his ability to drive from one place to another in London by navigating a car in an interactive computer game called "The Getaway," which accurately depicted the streets of central London. T.T. was able to do this as well as control subjects, a group of retired London taxi drivers, but only if the route involved just main roads. As soon as it was necessary to navigate along side streets, T.T. became lost, even though he had been taking people on taxi rides through those same side streets for 37 years. Eleanor Maguire and coworkers *concluded that the hippocampus is important for accessing details of routes that were learned long ago.*
The research we have described on how the brain is involved in wayfinding has focused on three structures: the parahippocampal gyrus, the retrosplenial cortex, and the hippocampus. Research studying the behavior of patients with brain damage and analysis of the results of brain scanning experiments have also identified a number of other brain areas involved in various components of wayfinding. The important message of all of these studies, taken together, is that *wayfinding is distributed across many structures in the brain. This isn't surprising when we consider that wayfinding involves seeing and recognizing objects along a route (perception), paying attention to specific objects (attention), using information stored from past trips through the environment (memory), and combining all this information to create maps that help us relate what we are perceiving to where we are now and where we need to go next.*
The Physiology of Reaching and Grasping
An important breakthrough in the study of the physiology of reaching and grasping came with the discovery of ventral (or what) and dorsal (or where/how) pathways that we described in Chapter 4 (see Figure 4.14).
Walking / Visual Direction Strategy
An important strategy used by walkers (and perhaps drivers as well) that does not involve optic flow is the visual direction strategy, in which observers keep their body pointed toward a target. If they go off course, the target will drift to the left or right. -->The walker can correct course by recentering the target. -->Another indication that flow information is not always necessary for navigation is that we can find our way even when flow information is minimal, such as at night or in a snowstorm. -->Jack Loomis and coworkers have demonstrated this by eliminating flow altogether, with a "blind walking" procedure in which people observe a target object located up to 12 meters away, then walk to the target with their eyes closed. These experiments show that people are able to walk directly toward the target and stop within a fraction of a meter of it. People can do this even when they are asked to walk off to the side first and then make a turn and walk to the target, while keeping their eyes closed. *Shows that we are able to accurately navigate short distances in the absence of any visual stimulation at all.*
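The visual direction strategy described at the top of this section amounts to a simple control loop: sense how far the target has drifted off center, turn part of the way back toward it, and step forward. The Python sketch below is only an illustration of that loop; the starting heading, step length, correction gain, and 12-meter target distance are assumed values, not parameters from the walking studies.

```python
import numpy as np

# Minimal sketch of the visual direction strategy as a control loop
# (illustrative values, not taken from the studies described above).
pos = np.array([0.0, 0.0])               # walker starts at the origin
target = np.array([0.0, 12.0])           # target 12 m straight ahead (blind-walking range)
heading = np.deg2rad(25.0)               # walker initially pointed 25 degrees off course
step, gain = 0.5, 0.5                    # step length (m); fraction of drift corrected per step

for _ in range(100):
    to_target = target - pos
    if np.linalg.norm(to_target) < 0.3:  # stop within a fraction of a meter of the target
        break
    bearing = np.arctan2(to_target[0], to_target[1])  # target direction (0 = straight up the y-axis)
    drift = bearing - heading                         # how far the target has drifted off center
    heading += gain * drift                           # turn to recenter the target
    pos += step * np.array([np.sin(heading), np.cos(heading)])

print(f"stopped {np.linalg.norm(target - pos):.2f} m from the target")
```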
Self-Produced Information
Another idea of the ecological approach -->When a person makes a movement, it creates information, and this information is, in turn, used to guide further movement. -->For example, when a person is driving down the street, movement of the car provides flow information, and the observer then uses this flow information to help steer the car in the right direction. Another example of movement that creates information that is used to guide further movement is provided by somersaulting. -->Benoit Bardy and Michel Laurent (1998) found that expert gymnasts performed somersaults better with their eyes open. -->When their eyes were open, the gymnasts appeared to be making in-the-air corrections to their trajectory. For example, a gymnast who initiated the extension of his or her body a little too late compensated by performing the rest of the movement more rapidly. Another interesting result was that closing the eyes did not affect the performance of novice somersaulters as much as it affected the performance of experts. Apparently, experts learn to coordinate their movements with their perceptions, but novices have not yet learned to do this. *Thus, somersaulting, like driving a car or piloting an airplane, involves using information created by movement to guide further movement.*
Parietal Reach Region [Monkey Experiment]
Carried out by Patrizia Fattori and coworkers. Figure 7.19: (1) The monkey observes a small fixation light in the dark; (2) lights are turned on for half a second to reveal the object to be grasped; (3) the lights go out and then, after a brief pause, the fixation light changes color, signaling that the monkey should reach for the object. -->The key part of this sequence occurs when the monkey reaches in the dark. The monkey knows what the object is from seeing it when the lights were on, so while it is reaching for the object in the dark, it adjusts its grip to match the object. A number of different objects were used, as shown in Figure 7.19b, each of which required a different grip. -->The key result of the experiment is that there are *neurons that respond best to specific grips.* For example, neuron A in Figure 7.20 responds best to "whole hand prehension," whereas neuron B responds best to "advanced precision grip." There are also neurons, like C, that respond to a number of different grips. Remember that when these neurons were firing, the *monkey was reaching for the object in the dark, so the firing reflected not perception but the monkey's actions.*
Importance of Vision for Balance (Swinging Room Study)
David Lee and Eric Aronson -->Placed 13- to 16-month-old toddlers in a "swinging room" (Figure 7.5). The floor was stationary, but the walls and ceiling could swing toward and away from the toddler. -->26 percent of the toddlers swayed, 23 percent staggered, and 33 percent fell down; the remainder were not affected, even though the floor remained stationary throughout the entire experiment! Adults were affected as well. -->Oscillating the room as little as 6 mm caused adult subjects to sway approximately in phase with this movement. Adults who didn't brace themselves could, like the toddlers, be knocked over by their perception of the moving room. -->These results show that vision is such a powerful determinant of balance that it can override the traditional sources of balance information provided by the inner ear and the receptors in the muscles and joints. -->Gibson's emphasis on (1) the moving observer, (2) identifying invariant (never changing) information in the environment that observers use for perception, and (3) considering the senses as working together was revolutionary for its time. Even though perception researchers were aware of Gibson's ideas, most research continued in the traditional way, testing stationary subjects looking at stimuli in laboratory settings. -->Today perception in naturalistic settings is one of the major themes of perception research.
Mirroring Others' Actions in the Brain
In the early 1990s, Giacomo Rizzolatti and coworkers were investigating how neurons in the monkey's premotor cortex fired as the monkey performed actions like picking up a toy or a piece of food. The goal was to determine how neurons fired as the monkey carried out specific actions. But as sometimes happens in science, they observed something they didn't expect. When one of the experimenters picked up a piece of food while the monkey was watching, neurons in the monkey's premotor cortex fired. What was so unexpected was that the neurons that fired to observing the experimenter pick up the food were the same ones that had fired earlier when the monkey had itself picked up the food. -->This led to the discovery of mirror neurons! Just looking at the food causes no response, and watching the experimenter grasp the food with a pair of pliers, as in Figure 7.22c, causes only a small response.
Neuroscience Research on Optic Flow
Figure 7.8 -->(a) Shows a neuron in the monkey's medial superior temporal (MST) area that responds best to a pattern of dots expanding outward, as would occur if the monkey were moving forward. -->(b) Another neuron that responds best to circular motion, as would occur if the monkey were swinging through the trees. -->What does the existence of these optic flow neurons mean? We know from previous discussions that finding a neuron that responds to a specific stimulus is only the first step in determining whether this neuron has anything to do with perceiving that stimulus (see Chapter 3, page 66). The next step is to demonstrate a connection between the neuron's response and behavior.
Visuomotor Grip Cells
A follow-up study conducted on the same monkeys as above. -->Fattori and coworkers (2012) discovered neurons that responded not only when a monkey was preparing to grasp a specific object, but also when the monkey viewed that specific object. An example of this type of neuron, which Fattori calls a visuomotor grip cell, is a neuron that initially responds when the monkey sees a specific object, and then also responds as the monkey is forming its hand to grasp the same object. This type of neuron is therefore involved in both perception (identifying the object by seeing) and action (reaching for the object and gripping it with the hand).
Audiovisual Mirror Neurons
Further evidence that mirror neurons are doing more than just responding to a particular pattern of motion is the discovery of neurons that respond to sounds that are associated with actions. These neurons in the premotor cortex, called audiovisual mirror neurons, respond when a monkey performs a hand action and when it hears the sound associated with this action. For example, the results in Figure 7.23 show the response of a neuron that fires (a) when the monkey sees and hears the experimenter break a peanut, (b) when the monkey just sees the experimenter break the peanut, (c) when the monkey just hears the sound of the breaking peanut, and (d) when the monkey breaks the peanut. What this means is that just hearing a peanut breaking or just seeing a peanut being broken causes activity that is also associated with the perceiver's action of breaking a peanut. These neurons are responding, therefore, to what is "happening"—breaking a peanut—rather than to a specific pattern of movement.
The Senses Do Not Work in Isolation
Gibson believed that rather than considering vision, hearing, touch, smell, and taste as separate senses, we should consider how each one provides information for the same behaviors. One example of a behavior that was originally thought to be the exclusive responsibility of one sense but is also served by another is balance. -->The ability to stand up straight depends on systems that enable you to sense the position of your body. These systems include the vestibular canals of your inner ear and receptors in the joints and muscles. Gibson argued that vision provides a frame of reference that helps the muscles constantly make adjustments to maintain balance.
Do Observers Use Optic Flow Information?
Gibson proposed that optic flow provides information about where a moving observer is heading. But can observers actually use this information? -->Researchers asked observers to judge where they were heading based on computer-generated displays of moving dots that create optic flow stimuli. -->Judgments were made relative to a reference point such as the vertical line in Figures 7.6a and 7.6b, where (a) indicates movement directly toward the line and (b) indicates movement to the right of the line. Observers viewing stimuli such as these can judge where they are heading relative to the vertical line to within about 0.5 to 1 degree. -->Psychophysical results such as these support Gibson's idea that optic flow provides information about where a person is heading. -->Researchers have also identified neurons that respond to flow patterns in the medial superior temporal area.
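One way to see that the flow field itself carries heading information: for pure forward translation, every flow vector points radially away from the FOE, so the heading point can be recovered by finding the single point from which all the flow vectors radiate. The least-squares sketch below is only an illustration of this geometric fact, not the procedure used in the psychophysical or neurophysiological studies; the dot positions, speeds, and noise level are arbitrary assumptions.

```python
import numpy as np

# Minimal sketch (illustration only): recover the focus of expansion, and hence
# heading, from a set of flow vectors. For pure forward translation, each flow
# vector d at image point p must be parallel to (p - foe), which gives one
# linear constraint on foe per dot.
rng = np.random.default_rng(1)
foe = np.array([0.12, -0.05])                        # true heading point in image coordinates (assumed)

pts = rng.uniform(-1, 1, size=(100, 2))              # image positions of the moving dots
flow = (pts - foe) * rng.uniform(0.5, 2.0, (100, 1)) # radial flow with arbitrary speeds
flow += rng.normal(0, 0.02, flow.shape)              # a little measurement noise

# Parallelism constraint: (p_x - foe_x)*d_y - (p_y - foe_y)*d_x = 0,
# rearranged as -d_y*foe_x + d_x*foe_y = p_y*d_x - p_x*d_y (linear in foe).
A = np.column_stack([-flow[:, 1], flow[:, 0]])
b = pts[:, 1] * flow[:, 0] - pts[:, 0] * flow[:, 1]
est, *_ = np.linalg.lstsq(A, b, rcond=None)

print("true FOE:     ", foe)
print("estimated FOE:", np.round(est, 3))
```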
Affordances: What Objects Are Used For
Gibson's ecological approach involves identifying information in the environment that is useful for perception. Described optic flow, which is created by movement of the observer. Another type of information Gibson specified is *affordances—information that indicates what an object is used for.* -->What this means is that perception of an object not only includes physical properties, such as shape, size, color, and orientation, that might enable us to recognize the object; our perception also includes information about how the object is used. -->For example, when you look at a cup, you might receive information indicating that it is "a round white coffee cup, about 5 inches high, with a handle," but your perceptual system would also respond with information indicating "you can pick the cup up" and "you can pour liquid into it." Information such as this goes beyond simply seeing or recognizing the cup; it provides information that can guide our actions toward it. Another way of saying this is that "potential for action" is part of our perception of an object.
Research on Affordances
Glyn Humphreys and Jane Riddoch (2001) studied affordances by testing patient M.P., who had damage to his temporal lobe that impaired his ability to name objects. M.P. was given a cue, either (1) the name of an object ("cup") or (2) an indication of the object's function ("an item you could drink from"). He was then shown 10 different objects and told to press a key as soon as he found the target object. The results of this testing showed that M.P. identified the object more accurately and rapidly when given the cue that referred to the object's function. Humphreys and Riddoch concluded from this result that M.P. was using his knowledge of an object's affordances to help find it. Although M.P. wasn't reaching for these objects, it is likely that he would be able to use the information about an object's function to help him take action with respect to the object. In line with this idea, there are other patients with temporal lobe damage who cannot name objects, or even describe how they can be used, but who can pick them up and use them nonetheless.
Invariant Information
Important Concept of the Ecological Approach -->Information that remains constant even when the observer is moving. -->Optic flow provides invariant information because flow is present as long as the observer is moving through the environment. Of course, as the observer moves through a scene, the flow might look different (houses flow past on a city street, trees on a country road), but the flow is still there. -->The FOE is another invariant property because it always occurs at the point toward which the observer is moving. If an observer changes direction, the FOE shifts to a new location, but the FOE is still there. Thus, even when specific aspects of a scene change, flow and the FOE continue to provide information about how fast a person is moving and where he or she is heading. When we consider depth perception in Chapter 11, we will see that Gibson proposed other sources of invariant information that indicate an object's size and its distance from the observer.
Landmarks
Important Source of Information for Wayfinding -->Objects along the route that serve as cues to indicate where to turn. -->Sahar Hamid and coworkers (2010) studied how subjects used landmarks as they learned to navigate through a digital maze-like environment. Subjects first navigated through the maze until they learned its layout (training phase). Then they were told to travel from one location in the maze to another (testing phase). During both the training and testing phases, subjects' eye movements were measured using a head-mounted eye tracker. The maze contained both decision-point landmarks and non-decision-point landmarks. Subjects spent more time looking at decision-point landmarks than at non-decision-point landmarks, probably because the decision-point landmarks were more important for navigating the maze. In fact, when maze performance was tested with half of the landmarks removed, removing landmarks that had been viewed less (and that were likely to be in the middle of the corridors) had little effect on performance. However, removing landmarks that observers had looked at longer caused a substantial drop in performance. It makes sense that the landmarks that are looked at the most would be the ones that are used to guide navigation. Another study, in which subjects learned a walking route through the University of Pennsylvania campus, showed that after subjects had learned the route, they were more likely to recognize pictures of buildings that were located at decision points than those located in the middle of a block.
Mirror Neurons Help Us Understand
In addition to proposing that mirror neurons signal what is happening as well as the intentions behind various actions, researchers have also proposed that mirror neurons help us understand (1) communications based on facial expressions, (2) gestures used while speaking, (3) the meanings of sentences, and (4) differences between ourselves and others. As might be expected from this list, it has also been proposed that mirror neurons play an important role in guiding social interactions.
Optic Flow Neurons & Behavior
Kenneth Britten and Richard van Wezel (2002) demonstrated a connection between the response of neurons in MST and behavior. -->They trained monkeys to indicate whether the flow of dots on a computer screen indicated movement to the left or right of straight ahead. -->When, while the monkey was making its judgment, the researchers electrically stimulated MST neurons that were tuned to respond to flow associated with movement to the left, the monkey's judgments were shifted even more to the left, increasing from 60 percent to 80 percent of the trials. *This demonstration that stimulating flow neurons can influence the monkey's judgment of the direction of movement supports the idea that flow neurons can, in fact, help determine the direction of perceived movement.*
Influence of Intention on Mirror Neurons Research
Mario Iacoboni and coworkers (2005) provided this evidence in an experiment in which they measured subjects' brain activity as they watched short film clips represented by the stills in Figure 7.24. -->There were two Intention films; both show a hand picking up a cup, but with an important difference. In the top panel, the table is neat, the food is untouched, and the cup is full of tea. In the bottom panel, the table is a mess, the food has been eaten, and the cup appears to be empty. Iacoboni hypothesized that the top film would likely lead the viewer to infer that the person picking up the cup intends to drink from it, and the bottom film would lead the viewer to infer that the person is cleaning up. -->Subjects also viewed two control conditions. The Context film showed the table setting, and the Action film showed the hand reaching in to pick up an isolated cup. *They contained the visual elements of the Intention films but didn't suggest a particular intention.* -->Iacoboni found that the Intention films caused greater activity than the control films in areas of the brain known to have mirror neuron properties (Figure 7.25). The amount of activity was least in the Action condition, higher for the Cleaning Up condition, and highest for the Drinking condition. *Based on the increased activity for the two Intention conditions, Iacoboni concluded that the mirror neuron area is involved with understanding the intentions behind the actions shown in the films. He reasoned that if the mirror neurons were just signaling the action of picking up the cup, then a similar response would occur regardless of whether a context surrounding the cup was present.* -->Mirror neurons code the "why" of actions and respond differently to different intentions. -->If mirror neurons do, in fact, signal intentions, how do they do it? One possibility is that the response of these neurons is determined by the chain of motor activities that could be expected to happen in a particular context. For example, when a person picks up a cup with the intention of drinking, the next expected actions would be to bring the cup to the mouth and then to drink some coffee. However, if the intention is to clean up, the expected action might be to carry the cup over to the sink. According to this idea, mirror neurons that respond to different intentions are responding to the action that is happening plus the sequence of actions that is most likely to follow, given the context.
Developmental Study: Bennett Bertenthal and Coworkers
Showed that infants as young as 4 months old sway back and forth in response to movements of a room and that the coupling of the room's movement and the swaying becomes closer with age.
Driving a Car
Michael Land and David Lee -->Studied the information people use to stay on course when driving. -->Fitted a car with devices to record the angle of the steering wheel and the car's speed, and measured where the driver was looking. -->According to Gibson, the focus of expansion (FOE) provides information about the place toward which a moving observer is headed. However, Land and Lee found that although drivers look straight ahead while driving, they tend to look at a spot in front of the car rather than looking directly at the FOE. -->Land and Lee also studied where drivers look when they are negotiating a curve. This task poses a problem for the idea of the FOE because the driver's destination keeps changing as the car rounds the curve. Drivers don't look at the point toward which they are heading, but instead look at the tangent point of the curve on the side of the road, as shown in Figure 7.10b. Because drivers don't look at the FOE, which would be in the road directly ahead, it has been suggested that drivers use information in addition to optic flow to determine the direction they are heading. An example of this additional information would be noting the position of the car relative to the lines in the center of the road or relative to the side of the road.
Mirror Neurons
Neurons that respond both when a monkey observes someone else grasping an object, such as food on a tray, and when the monkey itself grasps the food. -->Most are specialized to respond to only one type of action, such as grasping or placing an object somewhere. Although one might think that the monkey was responding to the anticipation of receiving food, the type of object made little difference: the neurons responded just as well when the monkey observed the experimenter pick up an object that was not food. -->Could the mirror neurons simply be responding to the pattern of motion? The fact that the neuron does not respond when the monkey watches the experimenter pick up the food with pliers argues against this idea.
Decision-Point Landmarks
Objects at corners where the subject had to decide which direction to turn
Non-Decision-Point Landmarks
Objects located in the middle of corridors that provided no information about how to navigate.
Avoiding Other Objects When Reaching
That obstacle avoidance is also controlled by the parietal regions responsible for reaching was demonstrated in an experiment by Igor Schindler and coworkers, who tested two patients with parietal lobe damage who had optic ataxia. These ataxia patients and a group of normal control subjects were presented with two cylinders, separated by 8 to 10 inches (Figure 7.21a). Their task was to reach between the two cylinders and touch anywhere on a gray strip located 20 cm behind the cylinders. The cylinders were moved to different positions (Figure 7.21b). -->The arrows indicate where subjects' hands passed between the cylinders as they reached to touch the strip. Notice that the control subjects (red arrows) changed their reach in response to changes in the cylinders' positions. In contrast, the reach of the ataxia patients was the same for all arrangements of the cylinders, as shown for one of the patients by the blue arrows. In other words, they didn't take account of the varying locations of the obstacles. *Schindler concluded from this result that the dorsal stream, which was damaged in the ataxia patients, not only provides guidance as we reach toward an object but also guides us away from potential obstacles.*
The Parietal Reach Region
One of the most important areas of the brain for reaching and grasping is the parietal lobe, at the end of the dorsal pathway. -->Areas in the monkey and human parietal cortex that are involved in reaching for objects have been called the parietal reach region (PRR). / Contains neurons that control not only grasping but also reaching. Recent evidence suggests that there are a number of different parietal reach regions in the human parietal lobe, and recording from single neurons in a monkey's parietal lobe has revealed neurons in an area next to the parietal reach region that respond to specific types of hand grips.
Neuroscience Research on Landmarks
When participants were shown pictures of the buildings while in an fMRI scanner, the response in areas of the brain known to be associated with navigation, such as the parahippocampal gyrus (see Figure 7.14), was larger for decision-point buildings than for non-decision-point buildings. Thus, decision-point landmarks are not only more likely to be recognized, but they also generate greater levels of brain activity. -->In another brain scanning experiment, Janzen and van Turennout (2004) had observers first study a film sequence that moved through a "virtual museum" (Figure 7.15). Observers were told that they needed to learn their way around the museum well enough to be able to guide a tour through it. Objects were located along the hallway. Decision-point objects marked a place where it was necessary to make a turn; non-decision-point objects were located at places where a decision was not required. -->After studying the museum's layout, observers were given a recognition test while in an fMRI scanner. They saw objects that had been in the hallway and some objects they had never seen, and their brain activation was measured as they indicated whether they remembered seeing each object. Figure 7.15c indicates activity in the right parahippocampal gyrus for objects the observers had seen as they learned their way through the museum. For objects the observers remembered, activation was greater for decision-point objects than for non-decision-point objects. But the most interesting result, indicated by the right pair of bars, was that the advantage for decision-point objects also occurred for objects that were not remembered during the recognition test. *Janzen and van Turennout concluded that the brain automatically distinguishes objects that are used as landmarks to guide navigation.* -->The brain therefore responds not just to the object but also to how relevant that object is for guiding navigation. This means that the next time you are trying to find your way along a route that you have traveled before but aren't totally confident about, activity in your parahippocampal gyrus may automatically be "highlighting" landmarks that indicate when you should continue going straight, make a right turn, or make a left turn, even in cases when you may not remember having seen these landmarks before. From both the behavioral and physiological experiments we have described, it is apparent that landmarks play an important role in wayfinding. But there is more to wayfinding than landmarks. Before you begin a trip, you need to know which direction to go, and you probably also have a mental "map" of your route and the surrounding area in your mind. You may not think of route planning as involving a map, especially for routes that are very familiar, but research studying people who have lost the ability to find their way because of damage to the brain shows that identifying landmarks is just one of the abilities needed to find one's way.
The Dorsal and Ventral Pathways
Patient D.F. had damage to her ventral pathway, which resulted in difficulty recognizing objects or judging their orientation, but she could "mail" an object by placing it through an oriented opening. The idea that there is one processing stream for perceiving objects and another for acting on them helps us understand what is happening when Serena, sitting at the coffee shop after her ride, reaches for her cup of coffee (Figure 7.18). She first identifies the coffee cup among the flowers and other objects on the table (ventral pathway). Once the coffee cup is perceived, she reaches for it, taking into account its location on the table (dorsal pathway). As she reaches, avoiding the flowers, she positions her hand and fingers to grasp the cup (dorsal), taking into account her perception of the cup's handle (ventral). She then lifts the cup with just the right amount of force (dorsal), taking into account her estimate of how heavy it is based on her perception of its fullness (ventral). Thus, reaching and picking up a cup involves continually perceiving the position of the cup, shaping the hand and fingers relative to the cup, and calibrating actions in order to accurately grasp the cup and pick it up without spilling any coffee. Even a seemingly simple action like picking up a coffee cup involves a number of areas of the brain, which coordinate their activity to create perceptions and behaviors.
Predicting People's Intentions
Researchers have proposed that there are mirror neurons that respond not just to what is happening but to why something is happening, or more specifically, to the intention behind what is happening. To understand what this means, let's return to Serena in the coffee shop. As we see her reach for her coffee cup, we might wonder why she is reaching for it. One obvious answer is that she intends to drink some coffee, although if we notice that the cup is empty, we might instead decide that she is going to take the cup back to the counter to get a refill, or if we know that she never drinks more than one cup, we might decide that she is going to place the cup in the used cup bin. Thus, there are a number of different intentions that may be associated with the same action.
"Perception depends on Action"
The idea that the purpose of perception is to enable us to interact with the environment has been taken a step further by researchers who have turned the equation around from "action depends on perception" to "perception depends on action," or "people perceive their environment in terms of their ability to act on it." This last statement, by Jessica Witt, is based on the results of many experiments, some of which involve sports. -->For example, Witt and Dennis Proffitt presented a series of circles to softball players and asked them to pick the circle that best corresponded to the size of a softball. When they compared the players' estimates to their batting averages from the just-completed game, they found that batters who hit well perceived the ball to be bigger than batters who were less successful. -->Other examples: tennis players who have recently won report that the net is lower, and subjects who were most successful at kicking football field goals estimated the goal posts to be farther apart. -->The field goal experiment is especially interesting because the effect occurred only after the kickers had attempted 10 field goals; before they began, the estimates of the poor kickers and the good kickers were the same. The sports examples all involved making judgments after doing either well or poorly. This supports the idea that perception can be affected by performance.
Do These Judgments Actually Measure Perception?
There are, however, researchers who question whether the perceptual judgments measured in some of the experiments we have described are actually measuring perception. Subjects might be affected, they suggest, by "judgmental bias," caused by their expectations about what they think will happen in a particular situation. For example, Bhalla and Proffitt, who found that people who were not in good physical condition judged hills as being steeper, also found that people who were wearing a heavy backpack judged hills to be steeper. Bhalla and Proffitt interpreted this result as showing that wearing the heavy backpack influenced the person's perception of steepness. An alternative interpretation is that subjects' expectation that hills could appear steeper when carrying something heavy might cause them to say a hill appears steeper when they are wearing a heavy backpack, even though their perception of the hill's steepness was actually not affected. -->This explanation highlights a basic problem in measuring perception in general: our measurement of perception is based on people's responses, and there is no guarantee that these responses accurately reflect what a person is perceiving. Thus, as pointed out above, there may be some instances in which subjects' responses reflect not what they are perceiving, but what they think they should be perceiving. Even though some experiments may be open to criticism, it is important to note that there are some experiments that do demonstrate a relationship between a person's ability to act and perception. -->The results of the experiments demonstrating this relationship between ability to act and perception are consistent with J. J. Gibson's idea of affordances, described earlier. Affordances, according to Gibson, are an object's "possibilities for action." Thus, perception of a particular object is determined both by what the object looks like and by the way we might interact with it. This brings us to the following statement by J. J. Gibson, from his final book, The Ecological Approach to Visual Perception: "Perceiving is an achievement of the individual, not an appearance in the theater of his consciousness. It is a keeping-in-touch with the world, an experiencing of things, rather than a having of experiences" (p. 239). This statement did not lead to much research when it was proposed, but years later many researchers have embraced the idea that perception is not just "an appearance in the theater of consciousness," but is the first step toward taking action in the environment. In addition, some researchers have gone a step further and suggested that action, or the potential for action, may affect perception.
Action-Based Accounts of Perception
The traditional approach to perception focused on how the environment is represented in the nervous system and in the perceiver's mind. According to this idea, the purpose of visual perception is to create a representation in the mind of whatever we are looking at; creating that representation accomplishes vision's purpose of representing the environment. -->Many researchers believe the purpose of vision is not to create a representation of what is out there but to guide our actions. We can appreciate the reasoning behind this idea by imagining a situation in which action is important for survival. -->Example: a monkey foraging for food in the forest. The monkey's color perception enables it to see some orange fruit that stands out against green leaves. The monkey reaches for the fruit and eats it. Of course, seeing (and perhaps smelling) the fruit is crucial, because it makes the monkey aware that the fruit is present. But the second step, reaching for the fruit, is just as important, because the monkey can't live on visual experiences alone. It has to reach for and grab the fruit in order to survive. Although there may be situations, such as looking at paintings in an art gallery or looking out at a misty lake in the morning, when seeing what is out there is an end in itself, the *vast majority of our experience involves a two-step process: first perceiving an object or scene and then taking action toward the objects or within the scene.* The idea that action is crucial for survival has been described by Mel Goodale (2011) as follows: "Many researchers now understand that brains evolved not to enable us to think (or perceive), but to enable us to move and interact with the world." According to this idea, perception may provide valuable information about the environment, but taking a step beyond perception and acting on this information enables us to survive so we can perceive another day.
What about situations in which the person hasn't carried out any action but has an expectation about how difficult it would be to perform that action?
What if people who were physically fit and people who were not physically fit were asked to estimate the steepness of a hill? When Mukul Bhalla and Dennis Proffitt asked people ranging from varsity athletes to people who didn't work out regularly to estimate the slant of steep hills, they found that the least fit people (as measured by heart rate and oxygen consumption during and after exercise) judged the hills as being steeper. The reason for this, according to Bhalla and Proffitt, is that over time people's general fitness level affects their perception of how difficult it will be to carry out various types of physical activity, and this in turn affects their perception of these activities. Thus, a person who isn't very fit experiences steep hills as being difficult to climb, and this causes them to perceive the hills as being steeper even if they are just looking at them. -->The idea that the expected difficulty of carrying out an action can influence a person's judgment of an object's properties was also studied by Adam Doerrfeld and coworkers, who asked subjects to estimate the weight of a basket of golf balls before and after lifting the basket. Subjects made this estimate under two conditions: (1) solo, and (2) joint, in which the subject expected that he or she would be lifting the basket with another person. The actual weight of the basket of golf balls was 20 pounds. Before lifting, subjects estimated that the basket weighed 21 pounds if they thought they would be lifting it alone, and 17.5 pounds if they thought they would be lifting it with another person. After lifting the basket, the average estimate was about 20 pounds for both conditions. Doerrfeld and coworkers concluded from this result that anticipation of how difficult a task will be can influence the perception of an object's properties.