PSY3051 Week 2 - Visual Processing of Information

Expertise hypothesis

Our proficiency in perceiving certain things can be explained by changes in the brain caused by long exposure, practice or training. *Based on the fact that experience with the environment can shape the nervous system. *Eg: Greebles, FFA.

V1/V2

Parts of the visual stream used to process small/local-level features and details of a stimulus (eg: lines, corners, edges) at a very specific location in the visual field.

Oblique effect

People perceive vertical and horizontal lines better than slanted lines. *The brain's response is larger. *Explanation: maybe horizontals and verticals are more common than slanted lines in our environment.

Pathway from retina to cortex

Signals from the retina travel through the optic nerve to: 1. The LGN. 2. The primary visual receiving area in the occipital lobe (V1/striate cortex). 3. Through two pathways to the temporal and parietal lobes. 4. Finally arriving at the frontal lobe. *Our eyes collect light → the retina turns that light into electricity through transduction → it's sent through the optic nerve to the LGN → then goes to the back of the brain → then it splits into two different pathways.

Multidimensional

Stimuli that cause many different reactions that are associated with activity in many different brain areas.

Cortical magnification

The apportioning of a large area on the cortex to the small fovea; more space is devoted to areas of the retina near the fovea than in the peripheral retina. *Although the spacing of locations on the retina may all be the same distance apart, the spacing is not the same on the cortex; it is much greater. *This means that electrical signals associated with the place the person is looking at are allotted more space on the cortex than signals associated with places located off to the side. *Thus, the representation on the cortex is DISTORTED (more space given to locations near the fovea than to locations in the peripheral retina). *Fovea = 0.01% of the retina's area. *Fovea signals = 8-10% of the cortex's retinotopic map.
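The disproportion in the figures above can be made concrete with a quick back-of-the-envelope calculation (only the 0.01% and 8% figures come from the card; the "magnification factor" name is just for illustration):

```python
# Back-of-the-envelope cortical magnification, using the card's figures:
# the fovea covers ~0.01% of the retina's area but its signals occupy
# ~8-10% of the cortical retinotopic map (using 8% as a lower bound).
fovea_retina_share = 0.0001   # 0.01% of retinal area
fovea_cortex_share = 0.08     # ~8% of the cortical map

# Ratio of cortical share to retinal share: how over-represented
# foveal input is, per unit of retinal area.
magnification_factor = fovea_cortex_share / fovea_retina_share
print(f"The fovea is over-represented by a factor of ~{magnification_factor:.0f}x")
```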

Hippocampus

The area of the brain associated with forming and storing memories.

Receptive field

The area of the retina that, when stimulated, influences the firing of the neuron. *They aren't fixed in place, but can change in response to where we're paying attention.

Lateral occipital complex

The brain region activated by objects (animate and inanimate).

Extrastriate body area

The brain region activated by pictures of bodies and body parts (but not faces).

Parahippocampal place area

The brain region activated by pictures of indoor and outdoor scenes, rooms, environments; spatial layout.

Ablation/lesioning

The destruction or removal of tissues in the nervous system.

Retinotopic map

The organised spatial map of the retina on the cortex; an organisation of cells that corresponds to locations in the real world. *A specific location within the LGN maps to a specific location on the retina, which maps to a specific location in the real world. *So if two points are close together in an object and on the retina, they will activate neurons that are close together in the brain. *This was discovered through electrophysiology: recording from neurons with an electrode that penetrates the LGN obliquely (across/diagonally). → Stimulating receptive fields on the retina shows the location of the corresponding neuron in the LGN. *But if you put an electrode directly into the LGN through the layers (perpendicular, not oblique), cells have overlapping fields. → Neurons along a perpendicular electrode track all have their receptive fields at about the same place on the retina.
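The "nearby on the retina → nearby on the cortex" property can be sketched with a toy mapping. The logarithmic compression and the constants below are illustrative assumptions (not values from the lecture); they also reproduce cortical magnification, since foveal locations get far more cortex per degree:

```python
import math

def cortical_position_mm(eccentricity_deg: float) -> float:
    """Toy retinotopic mapping: cortical distance from the foveal
    representation grows logarithmically with retinal eccentricity.
    The scale factor (20 mm) is an illustrative assumption only."""
    return 20.0 * math.log(1.0 + eccentricity_deg)

# Two retinal points 1 degree apart near the fovea...
foveal_span = cortical_position_mm(1.0) - cortical_position_mm(0.0)
# ...vs two points 1 degree apart out in the periphery.
peripheral_span = cortical_position_mm(21.0) - cortical_position_mm(20.0)

# The ordering is preserved (a retinotopic map), but the same retinal
# separation occupies much more cortex near the fovea.
print(foveal_span, peripheral_span)
```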

Contextual modulation

The enhanced response caused by stimuli presented outside of the receptive field.

Specificity coding

The idea that an object can be represented by the firing of a specialised neuron that responds only to that object (eg: grandmother cell). *But, unlikely to be correct *Eg: even though there are neurons that respond to faces, these neurons usually respond to a number of different faces; there are just too many different things in the world to have a separate neuron dedicated to each one individually. *Alternative: a number of neurons are involved in representing an object.

Modularity

The idea that specific AREAS of the cortex are specialised to respond to specific types of stimuli. *As we move down the 'what' pathway, we engage more and more complex cells and can say what an object is with increasing levels of detail (cells respond to more and more complex visual stimuli). *The further we move down the 'what' pathway, the more complicated the objects we're able to perceive. *Groups of cells respond to complex classes of objects, eg: faces, scenes/environments, the written word, inanimate objects, etc. *These cells are grouped by object class. *One group of cells forms a MODULE.

Area V1/ Striate Cortex/ Visual Receiving Area

The occipital lobe; the place where signals from the retina and LGN first reach the cortex.

Optic chiasm

The point at which visual information crosses over to the opposite hemisphere.

Population coding

The representation of a particular object by the pattern of firing of a large number of neurons. *Eg: Bill's face might be represented by one pattern of firing, and Mary's face by a different pattern. *Advantage: a large number of stimuli can be represented. *Good evidence for this coding, but for some functions a large number of neurons is unnecessary.
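The Bill/Mary example above can be sketched as patterns of firing rates across one shared set of neurons (the neuron count and the rates are invented for illustration):

```python
# Population coding sketch: the SAME five neurons encode both faces,
# and identity is carried by the whole pattern of firing rates.
# (Rates in spikes/s; all numbers invented for illustration.)
bill_face = [30, 5, 22, 0, 14]
mary_face = [2, 28, 9, 17, 3]

# No single neuron identifies a face on its own; the two codes differ
# as whole patterns, even though every neuron serves both stimuli.
patterns_differ = bill_face != mary_face
population_size = len(bill_face)
```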

Organisation of the visual cortex

The visual cortex is organised in terms of: *Location. *Orientation. *Eye preference. (Ice cube model)

Experience-dependent plasticity in humans

There is a part of the brain that responds best to words/letters, which are things we only encounter through experience in childhood → thus, this part of the brain is developed through nurture and experience. *Eg: fMRI experiments show that training results in FFA areas responding best to cars/birds (for experts in those areas) and to Greeble stimuli. *Thus: this is a learned process; the brain learns to recognise these things. *Eg: kittens' visual systems being shaped by the environment in which they were raised (selective rearing).

Complex cells

These are like simple cells in that they respond to bars of light of a particular orientation, but are unlike simple cells because they also respond to MOVEMENT of bars of light in a specific direction. *Eg: for a particular cell, a bar of light moving vertically won't make it fire. *But when the bar moves at 45 degrees in a left-to-right direction, this is what the cell is tuned to, so it will fire. → So these cells are tuned to ORIENTATION/LOCATION, and also to a specific direction of MOVEMENT. *This is measuring the stimulus-physiology relationship.

Orientation tuning curves

These show the response of simple cortical cells for orientations of stimuli. → Shows relationship between orientation and firing. → How the neuron is tuned to a particular orientation
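A tuning curve like this can be modelled as a bell-shaped response around a preferred orientation. A Gaussian is a common textbook choice, but the peak rate and bandwidth below are illustrative assumptions, not values from the lecture:

```python
import math

def tuning_response(orientation_deg, preferred_deg=90.0,
                    peak_rate=50.0, bandwidth_deg=20.0):
    """Gaussian orientation tuning curve: firing rate falls off as the
    stimulus orientation moves away from the cell's preferred one.
    (Real orientation tuning wraps around at 180 degrees; this toy
    version ignores that. Peak rate and bandwidth are assumptions.)"""
    delta = orientation_deg - preferred_deg
    return peak_rate * math.exp(-(delta ** 2) / (2 * bandwidth_deg ** 2))

at_preferred = tuning_response(90)   # maximal firing at the preferred angle
off_by_45 = tuning_response(45)      # much weaker response off-preference
```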

Lateral Geniculate Nucleus (LGN)

This area processes visual information without changing it. *Major function: to REGULATE neural information from the retina to the visual cortex. *Its cells have centre-surround receptive fields (which detect stimuli within the field of view of a given neuron). *Signals are received from the retina/cortex/brain stem/thalamus. *Signals are organised via: 1. Which eye they came from/ retinal location. 2. Receptor type. 3. Type of environmental information.

Prosopagnosia

This disorder results from temporal lobe damage (damage to FFA); involves difficulty recognising the faces of familiar people.

Where pathway

This is the DORSAL pathway, in the PARIETAL lobe. → Back/upper part of brain.

What pathway

This is the VENTRAL pathway; in the TEMPORAL lobe. → Lower part of brain.

Sparse coding

When a particular object is represented by a pattern of firing of only a small group of neurons; the majority of neurons remain silent. *Eg: Bill's face would be represented by the pattern of firing of a few neurons, and Mary's face would be signalled by the pattern of firing of a few different neurons. *Some of these neurons might overlap; a particular neuron can respond to more than one stimulus. *Neurons that respond to very specific stimuli were discovered when recording activity in the temporal lobe of patients undergoing brain surgery for epilepsy (a neuron only responded to pictures of Steve Carell, and not to other people's faces). *However, they only had 30 minutes to record from these neurons, so they most likely would have found other faces that cause this neuron to fire if they'd had more time. *Thus, this neuron is probably an example of sparse coding rather than specificity coding.
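The contrast with population coding can be sketched: in a sparse code most neurons stay silent and each stimulus drives only a small, possibly overlapping subset (the population size, indices, and rates are invented for illustration):

```python
# Sparse code over 1000 neurons: each face drives only a handful,
# and the rest stay silent (all numbers invented for illustration).
n_neurons = 1000
bill_active = {12: 40, 305: 25, 871: 33}   # neuron index -> rate (spikes/s)
mary_active = {12: 18, 97: 36, 644: 29}    # neuron 12 responds to BOTH faces

# Only a tiny fraction of the population fires for any one stimulus...
fraction_active_for_bill = len(bill_active) / n_neurons
# ...and subsets may overlap: one neuron can serve more than one stimulus.
overlap = set(bill_active) & set(mary_active)
```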

Distributed representation

When a stimulus causes neural activity in a number of different areas of the brain, so that the activity is DISTRIBUTED across the brain. *Paradox: specific areas of the brain are responsible for perception of specific stimuli vs. a map of activity stretches over a large area of the cortex. *Thus, even though some stimuli activate specialised areas, the wide variety of stimuli we encounter in the environment causes activity that is distributed across a wide area of the cortex.

Selective adaptation

When we view a stimulus with a specific property, neurons tuned to that property will fire, causing neurons to eventually become fatigued; to adapt. *This has two physiological effects: 1. Neuron's firing rate decreases. 2. Neuron fires LESS when the stimulus is immediately presented again. *The adaptation is considered 'selective' because only the neurons responding to that orientation (eg: vertical) will adapt, and neurons that were not firing will not adapt. *Need to use this procedure to measure the physiology-perception relationship. *Example: when you leave a loud bar and can't hear well anymore. MAIN POINT: if you keep showing a neuron the same feature repeatedly, the neuron that is tuned to that feature will get fatigued → it will adapt, and its firing rate/response will decrease.

Tiling

Working together, location columns cover our entire visual field (like tiles cover a wall).

Selective adaptation experiment

→ To measure this phenomenon, we first measure sensitivity to a range of one stimulus characteristic (eg: we measure how good someone is at detecting feature X). → We then make them adapt to that characteristic through extended exposure (eg: staring at something for a long time). → Then we re-measure sensitivity across the range of the stimulus characteristic to see how adaptation has changed perception of the stimulus. (I.e., how sensitive are we now? How well can we detect the stimulus now?). *This experiment measures how a physiological effect (adapting the feature detectors that respond to a specific orientation) causes a perceptual result (decreased sensitivity).
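The three-step procedure above can be sketched as a toy simulation in which adaptation fatigues only detectors tuned near the adapted orientation. The Gaussian fatigue rule and all constants are assumptions for illustration, not the real psychophysical model:

```python
import math

def sensitivity(test_deg, adapted_deg=None):
    """Toy sensitivity model: baseline sensitivity is flat across
    orientations; after adapting to one orientation, sensitivity drops
    only for test orientations near it (illustrative constants)."""
    baseline = 1.0
    if adapted_deg is None:          # step 1: no adaptation yet
        return baseline
    # Selective fatigue: largest at the adapted orientation, fading out.
    fatigue = 0.6 * math.exp(-((test_deg - adapted_deg) ** 2) / (2 * 15 ** 2))
    return baseline - fatigue

before_vertical = sensitivity(90)                  # step 1: measure
after_vertical = sensitivity(90, adapted_deg=90)   # step 3: re-measure (adapted)
after_oblique = sensitivity(45, adapted_deg=90)    # far orientations barely change
```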

Memory in the visual system

*A study looked at which brain areas are active when looking at scenes, inanimate objects and faces. *Found similar locations for the PPA, LOC and FFA as the previous study, but this study also showed that it's not just these areas that respond to these objects. *There's a whole network of regions that are active for certain classes of stimuli - it depends how complex the stimulus is, and how similar it is to other stimuli you've seen. *This study showed that you don't just recruit the module of cells that responds to that visual object; you also recruit the specific MEMORY AREA that's given over to that class of object. *Eg: the FFA also uses the perirhinal cortex. *These memory areas also do difficult visual discrimination (as well as remembering things). Alzheimer's: *Alzheimer's disease is associated with difficulty in memory. *But early Alzheimer's patients will say that the worst part of the disease is the loss of function → they can't go shopping anymore because they can't remember where they've parked their car. *You assume that this is forgetfulness, but if you consider this problem from a perceptual point of view, it's a visual decision as well as a memory decision. *You see a visual environment, and you need to engage the visual processing that detects scenes. So if the part of the brain that's responsible for visual discrimination is compromised, you don't know whether to turn left or right when leaving the shop. It becomes a visual issue of 'where to look'. *It's also hard to find your own car; hard to tell your car apart from all the other cars. *So if you have a damaged memory area, you also won't be able to tell the difference between objects. *So, we need to re-conceptualise what memory problems are → we also need to consider them in terms of visual dysfunction.

Dougherty's cortical magnification experiment

*Dougherty used brain imaging to demonstrate cortical magnification in the human cortex. *A participant looked directly at the centre of a screen, so that a dot at the centre fell onto their fovea. *A stimulus light was presented in two places: (1) near the centre, which illuminated a small area near the fovea. (2) farther from the centre, which illuminated an area in the peripheral retina. *Stimulation of the small area near the fovea activated a greater area on the cortex than stimulation of the larger area in the periphery. → This demonstrates cortical magnification. *In another test, a person looks at a red spot at the centre of a paragraph of text. *The letter "a", which is near where the person is looking, is represented by a much larger area in the cortex than the letters that are far from where the person is looking. *So, the extra cortical space allotted to letters/words that the person is looking at provides extra neural processing necessary to accomplish tasks like reading, which require high visual acuity.

Left vs. right visual field

*Our visual world is split into two visual fields. *There is an imaginary line running down the middle of our visual field: everything to the left of the line is the left visual field/ everything to the right of the line is the right visual field. *The left visual field is processed by your right hemisphere, and the right visual field is processed by your left hemisphere → OPPOSITE. *Each of our eyes sees both visual fields, but the retina is split in half, so one half of each retina is processed by one hemisphere. *Eg: for everything in the left visual field, light ends up on the right-hand side of each retina, which is then passed to the right hemisphere. *The point at which the information crosses over to the opposite hemisphere is called the OPTIC CHIASM. *Then, that information is passed by the LGN to the occipital lobe.
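The crossing rule above can be written as a tiny lookup: each visual field lands on the opposite half of each retina and is processed by the opposite (contralateral) hemisphere. The function names are just for illustration:

```python
def processing_hemisphere(visual_field: str) -> str:
    """Each visual field is processed by the CONTRALATERAL hemisphere:
    left field -> right hemisphere, right field -> left hemisphere."""
    return {"left": "right", "right": "left"}[visual_field]

def retinal_half(visual_field: str) -> str:
    """Light from the left visual field falls on the right half of
    each retina, and vice versa (the eye's optics reverse the image)."""
    return {"left": "right", "right": "left"}[visual_field]

hemi = processing_hemisphere("left")   # the right hemisphere handles it
```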

Greeble stimuli

*Greebles = little creatures created by scientists. *When you first see a family of Greebles, they all look the same and you can't tell them apart (can't see what features distinguish Greeble A from Greeble B). *But when you get exposure to them, learn more about them, and have lots of experience with them, you can learn their defining features. *So after a while of figuring out who they are and what they do, you start recruiting your FFA to distinguish between them. → These specialised brain areas reflect both evolution and nurture; this is a conjunction of nature and nurture, and one is required for the other.

How pathway

*The "where" pathway may actually be a "how" pathway, because it's not just about locating objects but also telling us how to act on them. → Location and action. → Evidence from neuropsychology (neurons in the parietal cortex).

Coding summary

*The code for objects, tones and odours involves a pattern of activity across groups of neurons. *Sometimes the groups are small (sparse coding). *Sometimes the groups are large (population coding).

Map on the striate cortex

*The cortex also shows a retinotopic map. *Electrodes that recorded activation from a cat's visual cortex show: 1. Receptive fields that overlap on the retina also overlap on the cortex. 2. This pattern is seen using an oblique penetration of the cortex (you find cells with neighbouring receptive fields).

Double dissociation/ patient D.F.

*This study involved posting a letter, but the slot could be changed in orientation. *The first part of the task is to orient the letter into the correct orientation needed to post it in the post box. *This is meant to tell if one can detect the orientation (visual properties) of the post-box slot. *The second part of the task is to get them to actually post the letter, to see if they know 'where' and 'how' to act on the object (using the where pathway). *Patient D.F. had damage to her temporal lobes, so she could not do the first part of the study; she couldn't detect 'what' orientation to hold the letter in. *But, when she was asked to just go ahead and post the letter, she could post it correctly. *The damaged 'what' pathway could be bypassed because she could use her 'where' pathway to post the letter. → This is called a single dissociation → having difficulty with only one half of an experiment. *They then found people with damage to their parietal lobe (instead of temporal) and showed that they couldn't do the opposite part of the experiment; they could hold the letter in the right orientation, but they couldn't post it in the post box. *So, they couldn't act upon the object's orientation because they didn't know where it was in relation to their hand or how to actually post it. → This is called a double dissociation - two different brain areas in two different groups of patients that show two different outcomes. MAIN POINT: these pathways are basically independent visual streams - your ability to detect what an object is is not dependent on your ability to know where the object is or to act upon it.

fMRI study

*This study was trying to figure out why some people are good at telling things apart, and why others aren't. *Eg: some people are really good at telling faces apart, or telling you what make and model a car is. *So we have little checkerboards that are different, but people will differ in how quickly or slowly they can detect this. *The retinotopic map allows us to figure this out. *If we look at the brain activity of someone who is telling these boards apart, we can map the brain activity onto the retinotopic map to see which parts of the brain are being activated to make this decision; to see the difference between these two images. *People who can do this task but more slowly (not so good at the task) are actually using a different part of the brain than someone who is really good at the task. *People who are less good at this task are using parts of the brain that are much earlier in the stream, eg: V1 and V2. *The key feature of V1 and V2 is that they process small, local-level features and details (lines, corners, edges) at a very specific location in the visual field. *So someone who is not good at this task is trying to match up little differences, swapping between the two, trying to find the points of difference. → They're using local, feature-level information, which is slower and less reliable. *People who are good at this task use V3 and V4; they're looking at the whole object, almost ignoring the local-level details and taking a big-picture view. *So the best way to do this task is globally, not locally. *Using this technique, we can tell the reason for people's individual perceptual differences.

Summary of cortical magnification

*When you look at a scene, information about the part of the scene you are looking at takes up a larger space on your cortex than an area of equal size that is off to the side. *Eg: even though the image of your own finger on the fovea takes up about the same space on the cortex as the image of your hand on the peripheral retina, you do not perceive your fingers as being as large as your hand. *Instead, you see the details of your finger far better than the details of your hand. *Thus, more space on the cortex translates to better detail vision, rather than larger size. → What we perceive doesn't actually match the "picture" on the brain.

Selective rearing experiment

*When you rear animals in a particular environment, the neurons that respond to that environment will grow and multiply (or at least tune to it), and the neurons that are not used will disappear → the animals lose the ability to detect things that weren't present in their early environment. *So Blakemore and Cooper reared kittens in a very specific visual world (a stripy tube) → that's all they ever saw, except for a dark room. *This meant that their simple cortical neurons and feature detectors only ever saw a particular orientation of the world (either vertical or horizontal). *So, kittens who only saw vertical lines LOST the neurons that would have responded to horizontal stripes. *After a few weeks, the kittens had relatively normal behaviour, but there was something strange about the way they behaved in their visual environment. *A few weeks after leaving that environment, once a cat had experience with the real world, it learned to behave normally. *But even then, it's not quite right; eg: it can see a vertical stick in front of it, but if you make the stick horizontal, the cat can't see it and won't play with it straight away (doesn't know that it's there). *Only if you leave it long enough will it slowly develop the ability to see horizontal lines → it just takes a lot longer, because the brain's plasticity for learning new things is better when you're younger. → MAIN POINT: both behavioural and neural responses showed the development of neurons for the environmental stimuli, and the loss of others, due to selective rearing.

Summary

1. Perception is affected by the eye's focusing system, and by the properties of the rod/ cone receptors. 2. Dark adaptation can be explained through pigment regeneration (chemical process). 3. Visual sensitivity/ detail vision can be explained by considering convergence (how the receptors are connected to other neurons). 4. Signals go through the receptive fields of neurons in the retina that form the optic nerve to the LGN to the visual receiving area in the occipital lobe/ to the temporal lobe. 5. Response properties of an individual neuron are determined by inputs from many other neurons. 6. Neurons at higher levels of the visual system respond to more complex stimuli. → Optic nerve neurons = respond to spots of light. → Visual cortex neurons = respond to oriented bars. → Temporal cortex neurons = respond to complex shapes/ faces. 7. The fact that neurons in different places of the visual system respond best to specific stimuli is evidence that the visual system is organised.

Hypercolumns/ ice cube model

A single location column with all of its orientation columns (0-180 degrees) and left/right dominance columns, which receives information about all possible orientations that fall within a small area of the retina. → Thus, it is well-suited to processing information from a small area in the visual field. EXPLANATION: *We have the surface of the brain (cortical surface), and we have one hypercolumn; a column of cells that respond to a specific location in the visual field. *This ice cube (hypercolumn) is then split in half. *One side responds preferentially to the right eye, and the other side to the left eye. *But most cells respond to both (they just have a preference). *So within this, we have a complete set of orientation columns, and that responds to one location in your visual field. *Then, there's another hypercolumn that responds to an adjacent visual location, and so on. → So the visual cortex is organised in this retinotopic, very regimented fashion.

Brain imaging

A technique that makes it possible to create pictures of the brain's activity. *Eg: MRI, fMRI. *How cortical magnification has been determined in the human cortex.

Module

AREAS (a brain structure/group of cells) specialised to process information about specific types of stimuli. *Eg: FFA, PPA, EBA, LOC.

Lesioning or ablation experiments

Ablation/lesioning (removing part of an animal's brain) is used to better understand the functional organisation of the brain: 1. An animal is trained to indicate perceptual capacities (eg: running through a maze, differentiating between objects, etc.). 2. A specific part of their brain is removed or destroyed. 3. The animal is retrained to determine which perceptual abilities still remain (i.e., see if they can still do the original task). 4. The results reveal which portions of the brain are responsible for specific behaviours. *Used the object discrimination problem and the landmark discrimination problem.

Fusiform face area

An area on the underside of the temporal lobe (in humans) that is specialised to respond strongly to faces. *Also uses the perirhinal cortex. *Perceiving faces improves rapidly over time. *But not just for faces → if you're an expert in a certain visual class, it will also activate this region. *Eg: if you're a bird-watcher and this is your profession, watching birds will activate this area; but for everyone else who doesn't care about birds, it will not be activated when looking at birds. *So this is also your EXPERTISE area (when you have lots of experience in telling things apart). → The common denominator with the FFA is learning and expertise.

Inferotemporal cortex

Area in the temporal lobe. *In monkeys, this area in the 'what' pathway contains cells that respond only to faces (cells tuned to faces). *Those cells won't respond to buildings or flowers, only faces. *In humans, the equivalent area is called the Fusiform Face Area (FFA), located in a roughly similar part of the temporal lobe.

What/where pathways

Both what and where pathways: *Originate in the retina and continue through two types of ganglion cells in the LGN. → They both go to the occipital lobe and then split off in two different ways at V3/V4! *Have some interconnections. → Mostly in occipital lobe; once they split off into temporal/parietal, they have fewer connections. *Receive feedback from higher brain areas.

MRI

Brain imaging technique that creates images of the STRUCTURES within the brain (established in the 1980s). *It has become a standard technique for detecting tumours and other brain abnormalities. *But it does not indicate neural activity.

fMRI

Brain imaging technique that determines how various types of cognition activate different areas of the brain. *Haemoglobin carries oxygen and contains a ferrous molecule that is magnetic. *Brain activity takes up oxygen, which makes the haemoglobin more magnetic. *This scan determines the activity of brain areas by detecting changes in magnetic response of haemoglobin. *Put someone in a scanner → give them a task to do → measure their cerebral blood flow → measure how the brain is being activated. *The brain is divided into voxels. *This technique can be used to figure out the entire map of someone's visual cortex (V1, V2, V3, V4, V5).

End-stopped cells

Cells that fire to moving lines of a specific length, or moving corners of a specific angle. → Edge detectors. → But they don't respond to stimuli that are too long. → Eg: if you have a moving corner, this particular cell will respond to a 45 degree corner when it's moving through the receptive field in a bottom-upwards direction.

Location columns

Columns that are perpendicular to the surface of the cortex, so that all neurons in a location column have their receptive fields at the same location on the retina; OVERLAPPING. *Blocks of cells in the visual cortex whose receptive fields are all looking at the same location in your visual world (overlapping receptive fields). → MAIN POINT: perpendicular electrode track = overlapping receptive fields.

Spatial organisation in the visual cortex

Different locations in the environment/on the retina are represented by activity at specific locations in the visual cortex. → Each place on the retina corresponds to a place on the LGN. → The organisation of objects in visual space becomes transformed into organisation in the eye when an image of the scene is created on the retina. → When an image has been transformed into electrical signals, these signals are organised in the form of "neural maps".

LGN organisation

Each LGN has six different layers, and each layer receives signals from only one eye. *This is how it sorts signals from the left vs. right eye. *Layers 2,3,5 = receive signals from IPSILATERAL (SAME) eye. *Layers 1,4,6 = receive signals from the CONTRALATERAL (OPPOSITE) eye. *Thus, each eye sends signals to both LGNs and the information for each eye is kept separated.
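The layer-by-eye sorting above can be captured in a small lookup (layer numbering follows the card):

```python
def lgn_layer_eye(layer: int) -> str:
    """Which eye feeds a given LGN layer, per the scheme above:
    layers 2, 3, 5 receive the IPSILATERAL (same-side) eye;
    layers 1, 4, 6 receive the CONTRALATERAL (opposite-side) eye."""
    if layer in (2, 3, 5):
        return "ipsilateral"
    if layer in (1, 4, 6):
        return "contralateral"
    raise ValueError("the LGN has six layers, numbered 1-6")

# Each eye's signals stay segregated: three layers per eye in each LGN.
inputs = {layer: lgn_layer_eye(layer) for layer in range(1, 7)}
```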

Orientation columns

Each column contains cells that respond best to a particular orientation. → Within a location column, there are a series of columns of cells that respond to lines/objects of specific orientations. → Neurons within a column fire maximally to the same orientation of stimuli. → Adjacent columns change preference in an orderly fashion. → 1mm across the cortex represents the entire range of orientations.

Evolution

Evolution is partially responsible for shaping sensory responses: *Newborn monkeys respond to DIRECTION of movement and DEPTH of objects. → can detect objects without having any previous experience with the visual world. *Babies prefer looking at pictures of assembled parts of faces; hard-wired to look at faces. *This "hardwiring" of neurons plays a part in sensory systems.

Stimulus characteristics for selective adaptation

Gratings are used as stimuli for selective adaptation. *Made of alternating light and dark bars. *Angle relative to vertical can be changed to test for sensitivity to ORIENTATION; i.e., you can activate different simple cortical cells by changing the angle of the grating. *Difference in INTENSITY can be changed to test for sensitivity to contrast.

Rod and frame illusion

Healthy people can also show a dissociation between the what/where pathways (not just people with brain damage). *In this illusion, observers perform two tasks: 1. Matching task - what pathway. 2. Grasping task - how pathway. *Results show that the frame ORIENTATION affects the matching task, but NOT the grasping task. *I.e., the line looks slanted (even though that's just the illusion), but if asked to act upon it by using their fingers to measure it, people put their fingers perfectly on the real vertical angle (no more illusion). *The 'what' pathway is responsible for this illusion, and the 'where/how' pathway is not susceptible to it.

Mind-body problem

How do physical processes like nerve impulses (i.e., the 'body' part of the problem) become transformed into the richness of perceptual experience (i.e., the mind part of the problem)?

Selective rearing

If an animal is reared in an environment that contains only certain types of stimuli, the neurons that respond to these stimuli will become more predominant/prevalent due to neural plasticity. *I.e., "use it or lose it". *This is a long-term effect, as opposed to adaptation which is just a short-term effect.

Ocular dominance columns

If you cut a location column in half, one side prefers information from the left eye, and the other prefers information from the right eye (this alternates). *Neurons in the cortex respond preferentially to one eye. *Neurons with the same preference are organised into columns. *The columns alternate in a left-right pattern every 0.25 to 0.50 mm across the cortex.

Double dissociation definition

In one person, damage to one area of the brain causes function A to be absent while function B is present. But in another person, damage to another area of the brain causes function A to be present while function B is absent. *This helps us understand the effects of brain damage. *Eg: damage to temporal lobe = difficulty naming objects, but can name where they're located. *Eg: damage to parietal lobe = difficulty naming where an object is located, but can name what it is.

Object discrimination problem

In this ablation study: *A monkey is shown an object. *It is then presented with a TWO-CHOICE task - it needs to choose which object is the one it saw before and which object is the new one. *A reward is then given for choosing the target object. → Main goal: differentiate between objects. **WHAT. *Findings: the TEMPORAL lobe is responsible for this.

Landmark discrimination problem

In this ablation study: *Some food is hidden in a well beneath a target object. *The monkey needs to find the object in order to find the food. → So this one is about the location of the object. **WHERE. Findings: the PARIETAL lobe is responsible for this.

Perceptual organisation

Lines of the same orientation are perceived as a group that stands out from the surrounding clutter.

Location vs. orientation columns

Location columns: cells respond to the same location in the visual field. Orientation columns: cells respond to different orientations of lines.

Method for orientation sensitivity

Measuring a person's contrast sensitivity to different line orientations in a grating. *They should all be roughly equal. *How to measure this: → Adapt the person to one orientation using a high-contrast grating. → Re-measure sensitivity to all orientations. → The psychophysical curve should show selective adaptation for that specific orientation if neurons are tuned to this characteristic (i.e., if cells got worse at detecting that one orientation but not the rest).
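The logic of this selective-adaptation method can be sketched as a toy simulation (all numbers and the Gaussian falloff are invented for illustration, not empirical values): sensitivity starts roughly equal across orientations, adaptation lowers it only near the adapted orientation, and the dip in the re-measured curve is the evidence for orientation-tuned neurons.

```python
import math

orientations = [0, 30, 60, 90, 120, 150]  # test orientations in degrees
# Pre-adaptation: roughly equal sensitivity at every orientation (hypothetical units)
baseline = {o: 100.0 for o in orientations}

def adapt(sensitivity, adapted_ori, dip=50.0, bandwidth=30.0):
    """Reduce sensitivity selectively around the adapted orientation.

    The Gaussian falloff and its parameters are illustrative only.
    """
    after = {}
    for o, s in sensitivity.items():
        # Angular distance, wrapping at 180 deg since gratings repeat every half turn
        d = min(abs(o - adapted_ori), 180 - abs(o - adapted_ori))
        after[o] = s - dip * math.exp(-(d / bandwidth) ** 2)
    return after

after = adapt(baseline, adapted_ori=90)
# The re-measured curve now dips at 90 deg (the adapted orientation)
# while distant orientations (e.g. 0 deg) are nearly unchanged.
```

A flat re-measured curve instead of this dip would argue against orientation-specific tuning, which is why the method is diagnostic.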

Method for contrast sensitivity

Measuring a person's contrast threshold by decreasing the contrast of a grating until the person can just see it. *A psychophysics experiment to detect the threshold for intensity (just-noticeable difference). *Sensitivity is calculated as 1/threshold. *If the threshold is LOW, the person has HIGH contrast sensitivity. *Sensitivity to lines needs to be good to detect lines in a low-contrast grating.
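The 1/threshold calculation above can be shown numerically (the threshold values here are made up for illustration):

```python
def contrast_sensitivity(threshold: float) -> float:
    """Sensitivity is the reciprocal of the contrast threshold."""
    if threshold <= 0:
        raise ValueError("threshold must be positive")
    return 1.0 / threshold

# A LOW threshold (faint gratings are still visible) gives HIGH sensitivity.
low_threshold = 0.01   # hypothetical: grating just visible at 1% contrast
high_threshold = 0.10  # hypothetical: needs 10% contrast to be visible

print(contrast_sensitivity(low_threshold))   # high sensitivity (~100)
print(contrast_sensitivity(high_threshold))  # low sensitivity (~10)
```

The reciprocal simply inverts the scale so that "better at seeing faint gratings" maps to a bigger number.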

Iteration cycle

More information comes back to the LGN from the cortex than comes in from the retina. → Raw data comes into the LGN, and the LGN sorts it using feedback information from the cortex. → That information is then iterated (repeated) and re-sorted based on the cortical information flowing back to the cells. → I.e., the LGN interprets the data and then asks, "did I get it right?"

Attention

Neural processing is concentrated at the place that is important to us at a particular moment. *When we pay attention to something, we become more aware of it, and can respond more rapidly to it; we may even perceive it differently. *Womelsdorf: → Attention can shift the location of a neuron's receptive field. → A monkey kept its eyes fixed on a white dot in the upper left of the screen, but was paying attention to the diamond in the middle of the screen. → When the monkey peripherally shifted attention to a circle on the other side of the screen, the receptive field map shifted to the right (even though the monkey was still looking at the same white dot in the corner). → This shifting of the receptive field shows us that attention is changing the organisation of part of the visual system.

Sensory coding

Neural representation for the senses, in terms of how neurons represent various characteristics of the environment.

Simple cortical cells

Neurons in the visual cortex that respond best to bars of a particular orientation. *Found by Hubel and Wiesel. *They have excitatory and inhibitory areas. *Side-by-side receptive fields. *They respond to spots of light, but respond best to a bar of light oriented along the length of the receptive field. → If the light forms a line, the response is stronger. → If the line is aligned with the cell's preferred orientation, the response is even stronger.

Feature detectors

Neurons that fire to specific types/features of a stimulus because of the properties bestowed upon them by neural processing. *Eg: cells that respond best to a specific orientation. *Includes: 1. Simple cortical cells. 2. Complex cortical cells. 3. End-stopped cortical cells. *They detect features of our world. *They fire to orientations in every scene we witness, helping to construct our perception of the scene. *The pathway away from the retina shows neurons that fire to more complex stimuli. → As we move through the visual pathway, the features detected become more and more complicated. → Eg: if there are millions of these cells detecting all the features in the visual world, how might they add up to represent a real-world object? → To represent a square, we need four line detectors for specific locations and orientations of lines, plus four corner detectors. → For a house, we need the same square plus a triangle on top. → And so we build up these simple feature-level bits of information into more complicated representations of the world. → As the information is passed down the visual stream, it becomes more and more complex. → The more feature detectors you add, the more complicated the objects we can build up.
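The square-from-features build-up described above can be sketched as a toy hierarchy (the stimulus encoding and detector functions are invented for illustration; real cortical circuits are far messier): line detectors feed corner detectors, which feed a square detector.

```python
# Toy hierarchy: "line detectors" feed "corner detectors", which feed a
# "square detector". A stimulus is a set of (position, orientation) edges
# (a made-up encoding, chosen only to make the hierarchy explicit).
square_stimulus = {
    ("top", "horizontal"), ("bottom", "horizontal"),
    ("left", "vertical"), ("right", "vertical"),
}

def line_detector(stimulus, position, orientation):
    """Fires iff a line of its preferred orientation is at its location."""
    return (position, orientation) in stimulus

def corner_detector(stimulus, side_a, side_b):
    """Fires iff both of its two adjoining line detectors fire."""
    return line_detector(stimulus, *side_a) and line_detector(stimulus, *side_b)

def square_detector(stimulus):
    """Fires iff all four corner detectors fire."""
    corners = [
        (("top", "horizontal"), ("left", "vertical")),
        (("top", "horizontal"), ("right", "vertical")),
        (("bottom", "horizontal"), ("left", "vertical")),
        (("bottom", "horizontal"), ("right", "vertical")),
    ]
    return all(corner_detector(stimulus, a, b) for a, b in corners)

print(square_detector(square_stimulus))          # True: all four sides present
print(square_detector({("top", "horizontal")}))  # False: one line is not a square
```

Each level only combines outputs from the level below, which mirrors the flashcard's point: complexity grows as information moves down the visual stream.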

