Cognitive Neuroscience


Functional

The "job description" of visual cortical processing regions: motion vs. category. V1: early stage. V2: secondary, but still early stage. V3: early. V4: early. All of these early regions are absent from higher-level, category-specific processing.

Somatosensory

(S1) and Motor (M1) Cortex: located along the precentral (M1) and postcentral (S1) gyri, separated by the central sulcus. This huge fold at the central sulcus holds the motor cortex (movement) and the somatosensory cortex (somatic sensation). If you were to sever the two, you would see that they mirror each other. Movement is managed by a close collaboration of the two.

The Supplementary Motor Area

(SMA) is located in the frontal lobe, superior and anterior to the primary motor area. The SMA mediates well-learned movements, without need for explicit sensory cues.
- limited to the midline, but present in both hemispheres; a cluster that becomes a central point of activity, one step further removed from the planning process
- particularly involved in highly rehearsed or trained actions, things we do frequently that become automatized
- we can drive activity in this part of our brain without ever making any explicit motor action
University of FL study on emotional imagery: subjects heard a neutral, good, or bad script in headphones, then were asked to imagine a situation corresponding to the script while remaining still. Emotional imagination produced a peak of activity in the SMA, though not a very large one. The study shows the separation between the supplementary motor area and explicit action.
The SMA (and cerebellum) is strongly active during imagined movements, demonstrating that these structures do not require actual movement to be active, and that sensory cues are unnecessary.

Primary visual cortex: V1

(along the calcarine fissure) receives retinal input according to the location of the stimulus in the visual field, sent here after the LGN (left/right, upper/lower, central/peripheral). The left half of your visual field is routed to your right cortical hemisphere, and vice versa. Upper-field areas are routed to the inferior bank of the calcarine fissure. Central regions route to the posterior calcarine fissure. Roughly half of the tissue volume is dedicated to the central areas, even though they make up only a small, concentrated portion of the visual field: they absorb a lot of V1 cortex because we're always trying to get as much detailed info as possible about the thing we're fixated on. The deeper into the calcarine fissure, the farther you are from the central fixation point.

Human Ear

- 95% of information flows from the cochlea to the brain
- 5% flows from the brain to the cochlea; it is pretty rare for a sensory system to have this return path
- Inner ear: Organ of Corti
All the green fibers are adjusting, or doing the work of communicating with the brain. The great majority of activity flow is coming from the ear (cochlea) to the brain. Our ability to focus our attention is aided by this filtering and tuning, which takes place partially in the cochlea itself.

Types of peripheral vision

- Central line of sight
- Near-peripheral vision
- Mid-peripheral vision
- Far-peripheral vision
The borders between the types of peripheral vision are loose, but generally, within a couple of degrees of your central fixation point is your fovea, a small area about the size of your thumb at arm's length.

Why do we have 2 eyes?

- In humans, there is considerable visual field overlap that enables different visual info to be processed simultaneously. There is wide variance across prey/predator species in peripheral vs. depth processing.
- Predators have strong overlap, allowing great depth-perception sensitivity.
- Prey have very little overlap, and are more focused on maintaining the broadest sensitivity to their surroundings.
Advantages of two eyes: 1) greater acuity, 2) a spare eye, 3) a larger visual field, 4) binocularity.
The LGN redistributes the split in the eyes' fields of vision: each LGN layer is strictly linked to one region of the visual field, and these different layers are then sent to different layers of the visual cortex. The V1 cortical sheet is made of many neurons differentially taking in and sending out info. Layers 2 and 3 mix the two visual fields: they contain binocular cells, so information from both eyes is blended together. Layer 4 keeps the monocular cells separate, each going to only one region: mixed (both eyes) vs. separate (one eye). We don't have much information about how this process works.

Sound

- Sound is a compression wave of air
- Auditory stimulus: tuning fork
- Air pressure varies across distance, arrives at our ear, and is then converted to an auditory response
- Natural sounds are complex: multi-frequency

Functional difference and concentrational difference between cones and rods across your peripheral vision

- The fovea (central fixation: 0 degrees) is concentrated with cones, while rods dominate peripheral vision.
- At the fixation point (where your eyes are pointed at a given moment) there is a huge concentration of cones, a real spike in the distribution, with very little cone density anywhere in peripheral vision.
Rods are just the opposite: many more of them, smoothly distributed across the peripheral areas in more of a normal curve, EXCEPT right in the middle, where there are very few; they are essentially crowded out by the cones.

Nonlinear sensitivity

- We can feel 2 points as close as 1 mm apart on the fingertip, but need ~40 mm between points to feel 2 on the trunk or calf
- We generally don't need much sensitivity outside the hands and face, but this is plastic: if you had no arms, other body parts could develop enhanced sensitivity
- Regions that are more or less sensitive to touch are determined by the cramming in of many transducers (skin-sensing devices): very sensitive where many are packed into one location, very insensitive where they are sparse
- The measure is the minimum discrimination distance between 2 points; sensitivity is far lower on the back/trunk and calves

Figure 5.26 Cerebellar inputs and outputs

- The exact operation of the cerebellum is unknown; it is poorly understood.
- We know that it receives huge amounts of input from the cerebral cortex (frontal, parietal), spinal cord, and midbrain sensory nuclei. This input is processed in some way, and then error-correction signals are sent through the deep nuclei of the cerebellum to the motor thalamus to adjust movements.
- The cerebellum is not limited to motor control: it's also active during non-motor tasks, as revealed in recent fMRI work.
- While poorly understood, the idea is that the huge processing power of the cerebellum is "repurposed" to serve a "what if" function for potential future behaviors: pre-error monitoring.
- Motor tuning, error correction: PMC/motor
- Behavior tuning, pre-error correction: PFC/multimodal
- These two are non-overlapping: the cerebellum imagines how things might go, then points out errors for future planning and whether the outcome will fit the goal state.
NEED TO KNOW: first, we know generally how the motor system is laid out, but we're pretty sketchy on the details; we have a relatively firm understanding of the prefrontal cortex; timing and the inhibition mechanism are pretty well understood in the basal ganglia; and we know the cerebellum contributes strongly to fine, real-time tuning of behavior (goal vs. what is actually happening). Finally, beyond that, we are sure the cerebellum also contributes to non-motor behaviors in an abstract way, by allowing us to perform error correction about things that haven't even happened yet, and to imagine if things go one way vs. another.

Second grouping of somatosensory input

- Distinct from discriminative touch: pain and temperature, plus the sensations of itch and tickle across the body.
Trigeminal nerve stimulation from pepper or ammonia (noxious stimulation of the face, mouth, nose, and eyes) also falls under this group. Pepper is not a taste or a smell; it instead drives trigeminal activity (not taste: pain). Your tolerance for the hotness of pepper will progress through your life. Slight adjustments can happen, driven by receptor density, but for the most part this isn't going to change your perception of sweetness or sourness. If you eat hot stuff, you'll eat it more and more because it's working on those differences; once acclimated, you can move up from there. Dr. Sab loves kimchi.

Real time tuning: Cerebellum

- Plays a huge role in motor action, particularly in the tuning and fine-tuning of motor actions
- One of the last frontiers of neuroscience
- Performs real-time error correction of motor actions: it keeps tabs on all planned and current actions and compares them; is what I intend to do congruent with what I'm actually doing?
The cerebellum serves to coordinate the complex functions of sensory-motor integration. While only 10% of brain volume, it contains almost 50% of the neurons of the whole brain. It is thought to be a huge 'data processor', crunching the numbers to adjust motor output to sensory feedback, thus enabling precise motor control in changing contexts.

Primary motor cortex

- In an animal model, we can stimulate parts of the sensorimotor strip and evaluate what the linkages are for a particular animal. Complex, multi-joint movements can be evoked by stimulation at a single location on the primary motor cortex. Again, movements instead of muscles are orchestrated: movements or actions, rather than individual muscles, are coded in the motor cortex.

Niels Birbaumer, 1984, 2004

- In his readiness potential work, Birbaumer realized that a subset of the population has progressive motor neuron diseases (e.g., ALS) that slowly lock someone inside their own body.
- These people could potentially be trained, using surface EEG electrodes, to move an EEG trace up and down on a screen: while they can still move, they are asked to imagine or plan something until they hit on a thought that can drive a reliable EEG response.
- The EEG negativity is then used to move a point across the screen, for simple yes-or-no decisions.
- This EEG communication can be used even when movement is completely gone, BUT it only works in 30%-40% of patients; it is a very noisy system.
We can exploit these reliable EEG patterns to communicate with patients with late-stage motor neuron diseases. This is the origin of 'brain-computer interfaces': the idea that we can track brain activity with EEG and use it to communicate, or directly learn someone's intentions, without the need for an explicit motor action.

Harmonics

- The "flavor" of sound (timbre)
- We can distinguish musical instruments from one another because of their harmonics
- On a guitar, the difference comes from the length of the string and the number of vibration modes between the fixed points

What is special about music?

- Only weakly linked to any evolved purpose
- Bobby McFerrin: the pentatonic scale is universally understood

Vision for skill learning

1st) A unidirectional pathway originates from nearly every area in the occipitotemporal network and projects to the neostriatum, supporting the formation of stimulus-response associations; there is no feedback from the neostriatum to IT.
2nd) A second cortico-subcortical projection is the unidirectional occipitotemporo-ventral striatum pathway, which also originates in aIT and supports the assignment of stimulus valence (ventral striatum = nucleus accumbens = reward processing).
3rd) Different areas within IT give rise to the occipitotemporo-amygdaloid pathway; the amygdala projects back to almost every area in the occipitotemporal network. This bidirectional feedback into the visual system helps us change our behavior.
- The neostriatum does not send feedback: it gets feedback but does not send any, and does not feed back into the vision system.
- Motor skill learning is not verbalizable and takes place in the neostriatum; it is impossible to teach verbally. The precise role of the neostriatum in this implicit learning process remains unknown.
- This region, with its dense limbic and medial prefrontal cortex connectivity, may integrate the output of reward-based systems for the purpose of motivating or prioritizing actions.

Cochlear Implants

2-24 artificial excitation points in the cochlea can be enough for people to decipher speech and other complex sounds; implants can reach up to 128 frequency points. Sounds are ordered by frequency in A1, like a piano: tonotopic organization. Primary auditory cortex is not visible from an external view. It is laid out to respond to different frequencies in sound: low frequencies drive the anterior edge, high frequencies drive the posterior edge.
To brain, from ear: afferent axons, basilar membrane, inner hair cells.
From brain, to ear: efferent axons, outer hair cells; this enables attention to filter or enhance relevant input.
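As a rough illustration of this tonotopic idea, here is a minimal Python sketch that assigns incoming frequencies to log-spaced electrode channels. The function names, electrode count, and frequency range are all assumptions for illustration; real implants use clinically fitted frequency maps, not this simple log spacing.

```python
import math

def electrode_centers(n_electrodes=24, f_lo=200.0, f_hi=8000.0):
    """Hypothetical log-spaced center frequencies, one per electrode:
    electrode 0 = low frequency (apical), the last = high (basal)."""
    step = math.log(f_hi / f_lo) / (n_electrodes - 1)
    return [f_lo * math.exp(i * step) for i in range(n_electrodes)]

def nearest_electrode(freq, centers):
    """Pick the electrode whose center is closest on a log-frequency axis,
    mirroring the roughly logarithmic frequency layout of the cochlea."""
    return min(range(len(centers)),
               key=lambda i: abs(math.log(freq / centers[i])))

centers = electrode_centers()
print(nearest_electrode(220.0, centers), nearest_electrode(7000.0, centers))
```

With only 24 discrete channels, many nearby frequencies collapse onto the same electrode, which is one intuition for why implant hearing is coarser than normal hearing yet still sufficient for speech.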

The somatosensory system includes

3 subtypes of physical sensation from the body: 1) touch, 2) temperature & pain, and 3) joint and muscle position sense. These modalities are channeled through 3 parallel pathways in the spinal cord and brainstem, ultimately feeding into subdivided regions of the postcentral gyrus, otherwise known as primary somatosensory cortex (S1).

The preparation & planning of motor behavior:

= premotor cortex
The execution and timing of motor behavior ("gating"): premotor >> basal ganglia >> primary motor
- the exact timing of any action is set in subcortical structures that manage the exact beginning and ending of an action

Central pattern generators (CPGs)

CPGs are spinal cord circuits that control rhythmic movements such as walking and running, and shifts from one phase of action to another, using solely lower motor neurons and sensory input. Amazingly, the spinal cord can control these shifts without cortical input.
- Limited only to the spinal cord; they get no brain input and operate independently from the brain.
- Manage typical rhythmic movements like locomotion, at a low level.
- CPGs were identified in the late 80s and 90s and are still being studied. In animal models in which the cortex was cooled, or cuts were made to prevent any central brain input into the spinal cord, four-footed animals were able to execute normal locomotion on a treadmill perfectly well.
- If you adjust the speed of the treadmill from fast to slow, there is a transition in how the limbs are organized: they go from left-right-left-right to two limbs working together. This transition from one form of locomotion to another requires a lot of organization, shifting how the limbs work together. It was incorrectly thought in the past that this shift required brain input.
- You can even adjust the slope of the treadmill, and it still doesn't absolutely require cortical involvement.
- We adjust our tread when we trip while walking; we are constantly slipping into these comfortable modes of motor action, quickly figuring out what to do and then letting it go.

Test question: How is it that we can create a single perceived object from the two eyes' differing representations?

By combining overlapping and non-overlapping information: distributing information across different layers of the LGN, and mixing across multiple steps in the visual system. There isn't any single answer to this question.

If we get two independent views of any object, how do we perceive just one?

Despite considerable differences between our left and right eyes' visual fields, we see one unified image. How is this convergence accomplished?
- it allows us to create a representation of what a thing is, but also how far away it is
- the two perceptions are blended
- there is no strict or agreed-upon process or location where we think this convergence happens; no one really knows

Secondary Auditory Cortex

Does higher-order processing: the area of the temporal lobe surrounding the primary auditory cortex, where pitch, loudness, and timbre are perceived and specific sounds are recognized.

Clusters of neurons in different regions..what are they doing?

For any given part of the visual field, V1 neuron clusters are sensitive to specific orientations of visual stimuli, among other simple visual features; beyond that, we don't exactly know what they are doing.
The earliest and best-known work concerns orientation tuning: the sensitivity of a cluster of visual neurons to lines or edges with a particular orientation relative to us. In animal and human models, if we present a line at a particular orientation and record from a cluster of neurons in the brain, we find certain regions where many neurons are sensitive to one orientation at one position in the visual field. For example, with intracranial electrodes in a region representing the lower visual field, a vertical line produces a lot of neuron firing, orientations close to that sweet spot show some firing, and orientations orthogonal to the vertical line produce no firing. We end up with a bunch of V1 clusters that together map out the orientations across our entire visual field.
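The tuning behavior described above (peak firing at the preferred orientation, some firing nearby, none for the orthogonal orientation) can be sketched with a toy Gaussian tuning curve. This is a common textbook simplification; the preferred orientation, tuning width, and peak firing rate below are made-up illustrative numbers, not measurements.

```python
import math

def circ_diff(theta, pref):
    """Smallest angular difference between two orientations.
    Orientations are 180-degree periodic (a line at 0 deg = a line at 180)."""
    d = abs(theta - pref) % 180.0
    return min(d, 180.0 - d)

def tuning_response(theta, pref=90.0, width=20.0, r_max=50.0):
    """Toy Gaussian orientation-tuning curve: firing rate is maximal at the
    preferred orientation and falls off as the stimulus rotates away."""
    d = circ_diff(theta, pref)
    return r_max * math.exp(-(d ** 2) / (2 * width ** 2))

print(tuning_response(90.0))  # peak firing at the preferred (vertical) orientation
print(tuning_response(0.0))   # near-zero firing for the orthogonal orientation
```

A population of such clusters, each with a different `pref`, is one way to picture how V1 tiles all orientations across the visual field.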

FFA

The fusiform "face area" is not purely face-specific, but is better considered a "visual expertise" area; humans happen to be face experts. This is evident in people with other types of 'expertise': it is not so much a face area as a place for categories you have a lot of expertise in, or really love looking at. Isabel Gauthier (2000-2007) showed bird and car experts images of faces and measured the corresponding brain activity; the activity changed when the experts instead viewed images from their domain of expertise, which also produced strong FFA activity (a cluster that met threshold but wasn't large for car experts, and a much larger cluster for bird experts).

Grill-Spector & Malach, 2004 experiment

Graphs for visual locations: y-axis = response to motion; x-axis = response to a certain kind of object (category).
V1: no enhanced activity for a face or scrambled face, and no enhanced activity if the face or scrambled face is moving: no sensitivity to category or motion.
PPA, FFA, LOobj, and LOface: all sensitive to whether something is an object or a face, but don't care about motion.
MT (middle temporal, on the middle temporal gyrus): a small area, very sensitive to motion, but showing little sensitivity to category.
V1, V2, V3, V3A, V4: none really care about category; V3A & V4 care a little about motion and category, but V1, V2, and V3 don't really care.

Fundamental missing pt 2

If we hear sound frequencies that are harmonics of a lower, base frequency, such as 300 Hz, 450 Hz, 600 Hz, and 750 Hz (all multiples of 150 Hz), we'll hear 150 Hz when the harmonics listed above are played together. We perceive them as combining to reflect a 150 Hz fundamental frequency, even though there is no signal at 150 Hz: the step from one harmonic to the next tells you what the fundamental frequency must be. In our life experience, every time we hear a 150 Hz fundamental, it's accompanied by multiple harmonics, so when we hear the harmonics alone, we can't avoid perceiving the fundamental.
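The step-between-harmonics logic can be checked with a few lines of Python. This is just a worked arithmetic sketch of the numbers above, not a model of auditory perception:

```python
from functools import reduce
from math import gcd

harmonics = [300, 450, 600, 750]  # Hz: the frequencies actually present

# The spacing between successive harmonics equals the missing fundamental.
diffs = [hi - lo for lo, hi in zip(harmonics, harmonics[1:])]

# Equivalently: the fundamental is the greatest common divisor of the set.
fundamental = reduce(gcd, harmonics)

print(diffs)        # [150, 150, 150]
print(fundamental)  # 150
```

Both computations recover 150 Hz, the frequency the auditory system "fills in" even though no energy is present there.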

Sensation + Expectation 2

If your past experience was filled with ducks, you're more likely to 'see' a duck. This perceptual 'tuning' is achieved through feedback loops across the brain. The brain doesn't operate in a linear or terminal way; over the last 50 years we've learned that the brain processes things many, many times over, and at every step of processing there is feedforward and feedback. At the earliest levels of the visual system (V1), we have only received feedforward information from the eyes and subcortical structures, but once we get to the cortical areas, we start feeding forward as well as feeding back. These feedback channels are what can start to guide our sensations into a particular categorical perception.

Fusiform Face Area: Faces. or expertise?

It's an expertise region of the brain; Dr. Sab believes this is an expertise-driven process! Starting from scratch: create "greeble" individuals and families that share feature similarity; collect fMRI in greeble novices; then train them to become greeble categorization experts and collect more fMRI data. This was to show that the face recognition area can be recruited by expert categorization of images other than human faces. The brain lit up for both faces and greebles. This showed the field that you could develop a region of your visual system, in a short amount of time, to respond to a highly familiar class of objects: you can go from no reactivity to very fine and specific activity by making use of this region of the brain.

Regions of ventral visual cortex

Late stage: inferior temporal (IT) visual cortex, toward the mid-back of the brain. Early/mid stage: occipital visual cortex; primary visual cortex runs from the back of the brain to the middle, along the calcarine fissure (shown looking through the feet at an inflated cortex, bottom-side view).
- Lots of fMRI and intracranial data suggest that there are multiple ventral visual regions dedicated to the processing of categories of visual stuff: simple objects, body parts, and words, as well as faces. Notice: not much in the early/mid stage.
Red areas: in the middle portion of the inferior ventral surface, well balanced across left and right; face-specific (middle-inferior temporal).
Green areas: more medial; place-specific (medial inferior temporal).
Blue areas: object-specific, more on the lateral edge.
Yellow areas: process body parts, i.e., looking at something living.
All early/mid-stage regions are left out of the figure; they don't contribute in any great way to high-level categorization, but do the basic work of the visual stream. Look at the BOLD signals over time and how their increases and decreases correspond to the image the person is looking at.

The homunculus, revised

More recent evidence suggests that primary motor cortex (M1) does not have a regular, organized somatotopic pattern. Instead, circuits in M1 show an overlapping representational plan of 3 major body segments of the homunculus: the upper limb, the lower limb, and the head and neck (study by J. Sanes, 2009). The breakdown is not as clean as once thought.

What happens if cerebellum is compromised? Cerebellar ataxia

Moving your finger between your nose and a fixed point in space requires significant cerebellar input, so this is a quick neurological test of cerebellar integrity. Internal bleeding or damage to the cerebellum (failure of cerebellar feedback) will make the movement very wavy and hard to manage.

Figure 4.10 Cutaneous and Subcutaneous receptors

Multiple receptors in the skin are involved in the perception of discriminative touch. Recognize that there are many different sorts of transducers embedded in the surface of our body that allow us to convert different kinds of information into brain input.
Free nerve endings = pain; near the surface of the skin.
Pacinian corpuscles = rapid vibration; detect pressure & high-frequency vibration (buzzing). Ex: a bug lands on you or passes your ear.
Ruffini endings = deep pressure. Ex: a strong hug that passes the threshold into alarm, where compression becomes dangerous; closing your hand in a door.
Meissner corpuscles = slower vibration.

Sensation + expectation

Our visual systems represent the world from limited information by making use of lots of past experience. In other words, what we see is a construction of what comes into our senses, guided by what we already know about how the world works. Vision is a construction, not a clean representation of reality: it is a mixture of what's out there in the world, what shows up in our eyes, and what we have experienced before, and how these mix together determines what we actually perceive. We are always taking in a rough-and-dirty estimate and channeling it through what we know in order to categorize it.

How ( and when) does experience bias perception?

Past experience (or expectation) biases V1 visual activity after the first wave of activity. The first wave of feedforward processing (~50 ms after onset) is stimulus-driven, while later V1 activity (~100 ms+ post-onset) also includes feedback effects from late visual cortex and perhaps other areas.
~50 ms: duck-rabbit
>100 ms: duck OR rabbit
50-70 ms: raw input to V1
100-150 ms: feedback to V1 to "guide" categorization
250-300 ms: decision

Motor Control

Primary motor cortex (M1) is proportioned according to the sensitivity of control: hands and face are over-represented relative to body size.
- Despite the homunculus layout of the sensorimotor strip, we can identify this activity even with fairly archaic fMRI overlap methods.
- Clusters of activity are associated with complex movements rather than individual muscles: when we ask for a simple motor action, we tend to see clusters of activity in the brain that don't reflect some combination of muscles BUT INSTEAD reflect the whole orchestrated action. The clusters do not combine all the required muscles (a little of each); instead you get one cluster representing, say, reaching out.

The plasticity of somatomotor perception

Rubber Hand Illusion: hide a subject's real hand, and place a realistic false hand at a plausible distance & position. Stroke both the visible false hand and the hidden real hand in unison for 3-5 minutes. Subjects soon report feeling ownership of the false hand: they "feel" strokes even when the real hand is left untouched, and react defensively if the false hand is threatened. This effect reveals the constructive nature of body ownership. It forces us to accept that even our basic understanding of what is and isn't our body is very flexible; we can show this by creating the illusion that the somatosensory input reflects what's happening to our body.

Pleasantness in music

Salimpoor et al., 2011: pleasant (vs. neutral) music activates dopaminergic reward structures, including the subcortical nucleus accumbens (NAcc).
- really good music is rewarding
- the study recorded the 'chills' people got when they listened to music
- used PET scans with raclopride, which competes with dopamine at its receptors
- when experiencing chills, you are having a huge release of dopamine

Pathway to Cortex

So how do we get from the retina to the early visual system, i.e., the primary visual cortex? If we maintain central fixation, everything to the left of the fixation point is the "left visual field," and it ends up in the right hemisphere of the brain; the reverse holds for the "right visual field," which crosses over into the left hemisphere. This crossing takes place in the optic chiasm: both optic nerves channel from the back of the retina, meet in the middle in front of the midbrain, and are switched over so that each visual field ends up in one hemisphere.
Along the way there are a couple of jump-off points: very early info is sent to a couple of midbrain structures serving behaviors that are critical but don't require much high-order processing. Circadian systems (our appreciation of daylight cycles) branch off pretty quickly, immediately after the chiasm.
- Pupillary reflex: dilation and constriction of the pupil is handled through a very short, immediate process. This is why, on TV shows or with unconscious post-head-injury patients, one of the first things a physician will do is test this reflex. If you are unconscious but your reflexes are still intact, we can be pretty sure there isn't a major brain bleed compressing and destroying brain tissue. If you don't have a light reflex, that is a HUGE warning sign: the person most likely has internal bleeding and should be taken to a hospital immediately.
Retinal activity crosses over at the chiasm, such that the left or right visual field is processed in the opposite lateral geniculate nucleus (LGN) of the thalamus (and hemisphere). Other pre-cortical visual processing involves the light cycle, pupil control, and motion tracking/gaze orienting. Finally, outputs go to the superior colliculus, a small structure at the top of the brainstem that assists in gaze direction.
- Ever had something appear in your visual field, like an ad banner on a website that moves and shifts? That is taking advantage of our very reflexive tendency to target anything moving in our environment. At a low level it's extremely hard to inhibit, which is why jerks in the ad department take advantage of it.

Auditory System

Start: 09/24/2020

Visual angle

The angle of an object relative to the observer's eye; it is determined by both object size and distance.
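The size-and-distance relationship can be written as the standard geometry formula, visual angle = 2·arctan(size / (2·distance)). Here is a small Python sketch; the thumb-at-arm's-length numbers are the usual rough approximation (~2 cm wide at ~57 cm), not exact measurements:

```python
import math

def visual_angle_deg(size, distance):
    """Visual angle (in degrees) subtended by an object of a given size
    viewed at a given distance. Use the same units for both arguments."""
    return math.degrees(2 * math.atan(size / (2 * distance)))

# A thumb (~2 cm wide) held at arm's length (~57 cm) subtends about
# 2 degrees, roughly the foveal region of the visual field.
print(round(visual_angle_deg(2.0, 57.0), 1))  # 2.0
```

Doubling the object's size grows the angle, and doubling the distance shrinks it, which is why the same thumb covers more of your visual field the closer you hold it.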

Figure 5.21 The Basal Ganglia loop that starts and stops movements (Part 2)

The basal ganglia rapidly control movements by "inhibiting the (constant) inhibitor."
- This achieves precise timing by maintaining constant inhibition: it's much quicker for the brain to enable something by turning off inhibition than by turning on excitation.
The SMA & premotor cortex initiate a movement command. This activates the caudate & putamen (C&P). C&P inhibit the globus pallidus (GP).
- Before being inhibited, the GP had been maintaining constant inhibition over motor nuclei of the thalamus: at any given moment it is inhibiting motor actions across your body. Therefore, to enable a quick motor action, we need to turn off that inhibition.
The GP releases its tonic (constant) inhibition of the motor nuclei of the thalamus. The thalamus now excites primary motor cortex and enables movement. A similar process is used for stopping a movement: C&P excite GP, which inhibits the thalamus; to stop, everything is just reversed.
WHAT YOU NEED TO KNOW: we enable quick motor control by maintaining constant inhibition via subcortical structures that dampen or prevent action, and we release that inhibition by exciting one subcortical structure to turn it off: we inhibit an inhibitor to enable fast motor action, or motor gating.
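The "inhibit the inhibitor" logic can be reduced to a toy boolean sketch. The function name and the two-node chain are illustrative simplifications of the real multi-pathway circuit, meant only to show why activating C&P *enables* movement:

```python
def thalamus_drives_motor_cortex(cp_active):
    """Toy model of basal ganglia gating: caudate/putamen (C&P) inhibit the
    globus pallidus (GP); GP tonically inhibits the motor thalamus; when that
    inhibition is released, the thalamus excites primary motor cortex (M1)."""
    gp_inhibits_thalamus = not cp_active   # C&P active -> GP is silenced
    thalamus_active = not gp_inhibits_thalamus
    return thalamus_active                 # active thalamus -> M1 excited

# At rest, GP's tonic inhibition holds movement off; activating C&P
# "inhibits the inhibitor" and lets the movement command through.
print(thalamus_drives_motor_cortex(False))  # False: no movement
print(thalamus_drives_motor_cortex(True))   # True: movement enabled
```

The double negation (inhibiting an inhibitor) is exactly why the net effect of C&P activation is excitatory on motor output.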

Visual Perception

The basic properties of visual perception can be described by factors including: lightness, brightness, color, form, depth, and motion. These perceived qualities are distinct from the physical properties of visual stimuli as measured by objective methods. Lightness: quantity of light (perceived to be) reflected off a surface Brightness: (perceived) intensity of a source of light (sun, bulb, candle) Ex: the lightness of a picture (a perceptual quality) is not directly related to the luminance (a physical property) of a picture. Lightness could be described as a calculated value, dependent upon local context and prior knowledge. We see the light source and shape and calculate what we think is the "real" brightness - taking the context and our prior experience into account.

Feedforward vs feedback vision

This leads to a developmental theory of the functions of the central and alternate routes. Early in development, the central route might be important for creating representations of complex stimuli in aIT.
"Central route" = linear, feedforward.
Alternate routes = autobiographical memory, long-term planning, imagination, working memory, etc.
- Early in life, people are primarily learning associations, with a limited ability to think outside the here and now.
- By age 5, 6, or 7, you can, without any direct cue, call on experiences of your life in rich detail (forward thinking, creativity, imagination, etc.).
Primary feedforward (central-route) visual system activity is linked to learning associations; alternate routes are linked with more abstract, non-sensory behaviors (excluding motor learning).

Two eyes one perception

Unitary vision is not simply a combination of left- and right-eye info. If we present conflicting info to each eye (binocular rivalry), our perception switches back and forth between the two. Thus, there is competition and choice between conflicting incoming information, reflecting downstream (late visual & prefrontal cortex) selection processes that bias our perception toward one stimulus.
- Not simple averaging but direct competition, allowing us to choose which we most believe. Any mixture of the two is very brief, but the end result is that we take one as correct and the other as incorrect, and continue to bounce back and forth. In the real world this never happens, because we blend left and right info at multiple layers of processing: the LGN, V1, and prefrontal cortex combine to determine what we see and come to a conclusion.

Scene Perception

We argue that the ventral visual pathway is a recurrent and highly interactive occipitotemporal network linking early visual areas and anterior IT cortex (aIT, a late stage in the fusiform gyrus) along multiple routes through which visual information is processed. The system is not linear but includes feedback: the early stage is feedforward, and later stages become feedback-driven. Authors: Kravitz, Saleem, Baker, Ungerleider, Mishkin.

Why do we depend so much on context?

We lack the sensory ability to distinguish among different modifiers of lightness, such as the intensity of the source, surface smoothness, clarity of the air, etc. We also lack the ability to directly register the true size and speed of objects. Color perception depends on regions of ventral visual cortex: patients with localized lesions (stroke, tumor) in the region of V4 often report a specific loss of color vision, with other visual abilities intact.

Motor planning occupies

a large proportion of cortex, with much of it located on gyri, which makes it a good place to record EEG; historically, surface EEG was used to investigate motor behavior. We can see this cortical activity with EEG as people prepare to move.
- The earliest studies were in the 1940s and 50s, followed by many more in the 60s and 70s.
- Studies were designed to be cue-driven, to have subjects respond at regular intervals with no cue, or to have them press a button whenever they felt like it within a few seconds.
- They found a very consistent negative-going shift that precedes the onset of movement and lasts a long time: half a second to a second before the action actually begins.
- This EEG preparation for movement is seen right over the premotor area of the brain.
- Electrodes on the arm tracked muscle action, which was used to create a zero point; researchers then looked backwards in time at what was happening on the brain or scalp surface in the hundreds of milliseconds before the action.
- In the old convention, negative voltage is plotted up and positive down.
- The negative shift begins a long time before the action.
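The "zero point and look backwards" procedure described above is back-averaging: align each trial's EEG to the muscle-defined movement onset and average, so the slow pre-movement shift emerges from the noise. The simulation below is entirely made up (ramp size, noise level, and function names are mine), and is only a sketch of the averaging logic, not real EEG analysis.

```python
import random

def back_average(recordings, onsets, pre):
    """Align each recording to its movement onset (time zero) and
    average the `pre` samples before onset across trials."""
    epochs = [rec[onset - pre:onset] for rec, onset in zip(recordings, onsets)]
    return [sum(e[t] for e in epochs) / len(epochs) for t in range(pre)]

random.seed(1)
trials, onsets = [], []
for _ in range(200):
    onset = random.randint(600, 900)                 # onset varies per trial
    trial = [random.gauss(0, 5) for _ in range(1000)]  # noisy baseline "EEG"
    # add a slow negative ramp over the 500 samples before onset
    for t in range(onset - 500, onset):
        trial[t] -= (t - (onset - 500)) * 0.02
    trials.append(trial)
    onsets.append(onset)

rp = back_average(trials, onsets, 500)
# The averaged trace drifts steadily negative toward movement onset
print(rp[0] > -2, rp[-1] < -8)  # True True
```

On any single trial the ramp is buried in noise; only the onset-locked average reveals it, which is why the muscle electrodes were needed as a time anchor.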

Know a little

about complex auditory processing; most research focuses on language and music. In most people there is a right-hemisphere bias for processing music, but this bias is limited to higher-order ("belt") auditory cortex. Some research in newborns suggests a right-hemisphere bias for music perception in auditory cortex, as long as the music is consonant (consonant music is music that sounds good to us).
- Study by Perani et al., 2010, PNAS: the left side showed no difference between original and altered music, but the right side showed a larger % signal change for original music and very little % signal change for altered music.
- More pleasant music drives the reward circuit plus auditory cortex.
- Patrik Vuilleumier et al. wanted to differentiate pleasantness from the arousing nature of music: Wagnerian, ominous music would be arousing but not pleasing. Arousing music drives auditory cortex and amygdala; pleasant music drives amygdala and nucleus accumbens.

Movement timing:

all managed by the basal ganglia. The basal ganglia are made up of the caudate, putamen, and globus pallidus (external and internal segments). The critical function of the basal ganglia is to "gate" the onset and offset of movements. SHOULD KNOW: this set of subcortical structures manages the interplay between premotor cortex and primary motor cortex for the timing of action.

Late-stage visual cortical

and non-visual (prefrontal) cortical structures can "tune" perception using past learning experiences. Moshe Bar put together a model that may explain this: an early visual/subcortical stage gets a copy of the low-spatial-frequency information. There is a one-way ventral visual stream to IT, and a two-way connection between the early visual/subcortical stage and OFC. An early copy is sent forward to be compared to what we've seen in the past, and that information is used to guide or aid us in quickly categorizing things.

Our ability to practice

and perform motor skills is astounding; the degree of refinement is difficult to appreciate. Achieving this sort of control requires immense and nearly instantaneous feedback control. We act in concert with this perception-action loop. We still don't fully understand how the motor system achieves what it does.

Rods

are photoreceptors that are sensitive to brightness: they detect just the intensity of light. They are interspersed with cones, which are sensitive to specific frequencies of light. Cones come in 3 flavors tuned to different frequencies: red, green, and blue. The combination of all that input gives us overall brightness as well as sensitivity to 3 colors; we mix those 3 and extrapolate to fill in all the colors in between. The eye is a transducer that converts light into neuronal firing via photoreceptors at the back of the retina.

Cones

are photoreceptors that are sensitive to color.

By recording EEG

as people make movements at their own pace, we see reliable negative voltage shifts that precede the movement by up to 1500 ms.
- This readiness potential is thought to begin almost 2 seconds before the action, with different slopes preceding the action.
- The pre-movement EEG voltage likely reflects premotor activity.
- If we know far ahead of time that an action is coming, we get a window in which we might predict the motor actions a person is going to make before they themselves know they will make them. With enough data and an algorithm, we can begin to predict when movements will happen in individuals, even before they know it.
TAKE-HOME MESSAGE: there is a lot of motor preparation happening in the brain that precedes even our own awareness of making an action. Especially in simple, self-driven motor acts, our awareness is pretty late to catch on that we're going to act. The moment of awareness is very difficult to establish and varies across people. For us it DEMONSTRATES that the motor actions required for what seems to be a simple act take a long time to prepare and have a pretty clear signature.

Functional job description of contrast:

at 5% contrast we can't see anything on the screen (THRESHOLD). At 10% we see contrast-driven activity in V1, V4, and the Lateral Occipital cortex (LO). At 100% we see contrast activity in V1 (doubling), a gentle increase in V4, and no increase in LO.
- LO: doesn't care about contrast.
- V1: cares a lot about contrast; its reactivity is linearly related to the contrast of the image.
- V4: cares much less; it only cares whether the image is identifiable, and beyond that it doesn't care much.
The further you push contrast, the less the late areas care.

The ventral visual pathway: an expanded neural framework for the processing of object quality

authors: Kravitz, Saleem, Baker, Ungerleider. Some models focus on visual cortex churning information in a repeating pattern of feedforward and feedback waves: a repeating loop in which activity rises and falls.

Other simple features

brightness, contrast, shading, and texture are also built into these retinotopic maps of the visual field

A global distinction

can be made between "what" and "where" pathways, although recent data suggest the streams are more mixed, with lots of crossing between the two. The 1st stage of visual cortical processing is in primary and extrastriate visual cortex; from there, information splits ventrally into temporal cortex or dorsally into parietal cortex. The streams operate in parallel but have slightly different jobs.
- Dorsal stream: analysis of motion and spatial relations; concerned with where things are positioned and moving in space, and with keeping track of where things are relative to you at any given moment. The "where" function.
- Ventral stream: analysis of form and color; concerned with what exactly it is we're looking at, in more precise detail, trying to make fine predictions and categorizations of what we are seeing. The "what" function.

If Visual perception isn't simply

driven by the physical properties of stimuli (luminance), and lightness illusions aren't contrast effects, what is controlling the perception of lightness? Purves and colleagues suggest that perceptual qualities are determined empirically, by prior experience: we learn how to judge lightness. Thus infants would initially see the stripes as equally light, but develop the "illusion" over time. 99.9% of the time this expectation is an aid to us.

Somatosensory representation is

dynamic: it shows massive plasticity across a lifetime. Stimulation of digits 2 & 3 over several months leads to an expansion of their somatosensory cortical areas.
- 30-40 years ago, neuroscience thought that once you're fully grown the somatosensory system is fixed: parts of the brain are dedicated to handling the face, hand, elbow, etc., and that linkage is permanent and lifelong. THAT IS NOT THE CASE.
- Modern medicine shows that people with missing limbs survive, demonstrating that the brain can be forced to adjust to a new body structure. Phantom limb sensation and phantom limb pain also let us track the brain's progression or regression, i.e., see and track the brain's plasticity.
- This idea was hammered out in an animal model: the numbered digits of a hand were mapped to chunks of cortex to identify where each digit was represented. Stimulation of particular fingers produces an expansion of their respective regions in the brain, but once the stimulation ends, the areas shrink back toward their original size and shape.
- What this means for how we think about organization in the somatosensory system (and the whole brain) is that things are a lot less fixed than we once imagined, and any major change in habits or demands produces a corresponding change in how the brain works.

The ventral visual pathway is reentrant, nonlinear

feedback connections, and others that bypass intermediate areas, allow direct communication between putative early and late stages of the hierarchy. We can see the effects of this jumping forward in backward masking: when one stimulus is presented and a second image follows it, the mask makes it difficult to fully resolve the first image. In a purely feedforward system there should be no effect of a mask presented AFTER a stimulus has evoked the initial neural response, yet the mask profoundly impairs both performance and awareness.

Grill-Spector et al., 2010

The general purpose was to identify parts of the visual system that are clearly linked to some category of objects: conscious awareness of faces as revealed by visual cortical activity via fMRI. Subjects studied face photos. In the MRI, the studied faces and 50 faces were shown for just 33 ms each, followed by a texture pattern. After each trial, subjects reported 1) whether they saw a face, and if so 2) whether they could recognize the face as from the studied set. After the series, the researchers separated the trials into 3 categories:
- Identified: on the retina, fully "seen"
- Detected but not identified: on the retina, partially "seen"
- Not detected: on the retina but not "seen"
If subjects correctly report that they saw a particular face before, that reflects conscious awareness. Activity was tracked with fMRI in 4 visual cortical regions: V1, V4, LO, FFA.
- V1: stimulus on the retina and fully seen, yet no statistical difference across conditions.
- V4: no statistical difference.
- LO: roughly triple the activity when subjects do see a face; face-sensitive, tied to seeing a specific face.
- FFA: dedicated to faces, with really clean, statistically significant differences among all 3 conditions. If you don't see a face, there is very little activity; if you see the face and know which face it is, there is a strong pop. Face-sensitive for a particular face.
FFA ranks highest for face recognition and LO second; the other two don't care much. Explicit, clear awareness of a face is most strongly tied to FFA activity. This doesn't imply the FFA only does face processing; assuming so is a logical fallacy.

Discriminative touch

includes touch, pressure, and vibration perception. Discriminative touch enables us to "read" raised letters with our fingertips, or describe the shape and texture of an object without seeing it. Resolution is pretty tight at the fingertips.

Sonogram

intensity = amplitude; frequency = number of zero crossings per unit time
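The "zero crossings per unit time" definition can be sketched directly in code. This is an illustrative snippet, not from the lecture: the function name and the 440 Hz test tone are my own, and the method assumes a clean single-frequency signal.

```python
import math

def zero_crossing_freq(samples, sample_rate):
    """Estimate frequency by counting sign changes: a pure tone
    crosses zero twice per cycle, so freq = crossings / (2 * duration)."""
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0) != (b < 0))
    duration = len(samples) / sample_rate
    return crossings / (2 * duration)

# One second of a 440 Hz tone sampled at 44.1 kHz (hypothetical test signal)
rate = 44100
tone = [math.sin(2 * math.pi * 440 * n / rate) for n in range(rate)]
print(round(zero_crossing_freq(tone, rate)))  # ≈ 440
```

A noisy signal would need filtering first, since extra sign flips inflate the estimate; that limitation is why real analysis uses spectrograms instead.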

Our auditory system

is hyper-sensitive in the 2-5 kHz range, the range of human speech: on equal-loudness curves the lines dip down there, so we need less physical sound intensity to report hearing a given loudness. Through evolution our auditory systems have tuned themselves to be very sensitive to this particular frequency range.
- Human hearing spans roughly 20 Hz to 20,000 Hz.
- Extremely intense sound (around 130 dB) creates tissue damage.
- Dogs and bats hear beyond 20 kHz.
At low frequencies we have poor (hypo) sensitivity to the amplitude of sound: we need 120 dB of sound intensity at 20 Hz to perceive 90 dB of loudness. From 1 kHz to 2 kHz our hearing is pretty accurate. In the hyper-sensitive range we only need about 2-3 dB of sound to report hearing a 10 dB sound; the dip in the curves reflects this hyper-sensitive tuning. dB is an exponentially rising unit.
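Since dB is an exponentially rising unit, equal steps in dB correspond to multiplicative jumps in physical intensity. A minimal sketch of the conversion (the function names are mine, not from the course):

```python
import math

def db_to_intensity_ratio(db):
    """Decibels are logarithmic: every +10 dB multiplies intensity by 10."""
    return 10 ** (db / 10)

def intensity_ratio_to_db(ratio):
    """Inverse mapping: intensity ratio back to decibels."""
    return 10 * math.log10(ratio)

print(db_to_intensity_ratio(10))   # 10.0   (+10 dB = 10x the intensity)
print(db_to_intensity_ratio(30))   # 1000.0 (+30 dB = 1000x the intensity)
print(intensity_ratio_to_db(100))  # 20.0
```

This is why the 120 dB needed at 20 Hz versus 90 dB perceived is such a huge gap: a 30 dB difference is a factor of 1000 in physical intensity.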

PPA

parahippocampal "place area": an example of fMRI work that has identified "categorical" visual areas

Third modality of somatosensory system is

proprioception, which includes receptors for muscle stretch, joint position, and tendon tension. This modality primarily feeds the cerebellum, which depends on immediate feedback regarding the orientation of the body (balance).
- It helps you detect where your body is in space: if you close your eyes and stretch out your arms, you still know where your arms are.
- Proprioception is susceptible to alcohol; we're not exactly sure why.
- Ex: a finger stretched back too far signals a huge response that isn't necessarily pain, but rather the joint recognizing that it's reaching the end of its limit of travel.
The cerebellum manages constant error correction and feedback control of complicated, multi-jointed actions.

Premotor (PM) cortex activity

reflects the preparation for and intention to act in response to external cues. PM activity precedes primary motor cortex activity, and PM activity ceases when overt movement begins (or is aborted).
- In the 70s and 80s a lot of activity was recorded in animal models focused on basic behaviors; much of this work identified big regions of premotor cortex sensitive to preparatory/planning activity.
- Once the action begins, premotor cortex drops out: its job is done, and it is no longer involved in the actual execution.
- In tasks with a delay period between the cue and the signal to move, PM activity rises after the cue and continues rising until after the movement starts, then drops off.

lateral geniculate nucleus

relays retinal info to V1 in stages through different layers. An evolved functional shift lets us get a quick-and-dirty estimate of what's in front of us and send it to visual cortex much more rapidly, by taking advantage of cell size: the magnocellular neurons in 2 layers of the LGN are physically larger and transmit info more rapidly as a result. Then, tens of milliseconds later, you get the detailed information about what you are seeing, such as the color, what the person is wearing, etc. There are 6 layers of LGN neurons (all subcortical), with 2 types of cellular processing layers:
1. Parvo: slow, detailed, specific; carries high-spatial-frequency info, fine sharp contrasts in brightness, anything that creates a line. Ex: an image that is flat gray but with all the edges highlighted.
2. Magno: fast, coarse, "global"; transmits the low-spatial-frequency info, the global shifts and shapes that make up your whole visual field.

Spectrogram

reveals sound dynamics more clearly: intensity = color; frequency = y-axis
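The spectrogram layout (time on the x-axis, frequency bins on the y-axis, intensity as color) can be sketched with a naive windowed DFT. This is illustrative only; the names and parameters are my own, and a real implementation would use an FFT with overlapping, tapered windows.

```python
import math

def dft_magnitudes(frame):
    """Naive DFT: magnitude of each frequency bin for one frame."""
    n = len(frame)
    mags = []
    for k in range(n // 2):
        re = sum(x * math.cos(2 * math.pi * k * t / n)
                 for t, x in enumerate(frame))
        im = -sum(x * math.sin(2 * math.pi * k * t / n)
                  for t, x in enumerate(frame))
        mags.append(math.hypot(re, im))
    return mags

def spectrogram(samples, frame_len=256):
    """One spectrum per frame: frames index the x-axis (time), bins index
    the y-axis (frequency), and magnitude would map to color in a plot."""
    return [dft_magnitudes(samples[i:i + frame_len])
            for i in range(0, len(samples) - frame_len + 1, frame_len)]

rate = 8000
tone = [math.sin(2 * math.pi * 1000 * n / rate) for n in range(512)]
spec = spectrogram(tone)
# Bin k corresponds to frequency k * rate / frame_len
peak = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(peak * rate / 256)  # 1000.0
```

The 1000 Hz test tone lands exactly on bin 32, so the peak bin maps cleanly back to the input frequency.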

The auditory system includes two

subcortical circuits to LOCALIZE low- and high-frequency sounds. On its own, each ear is pretty poor at identifying where a sound is coming from, so the system cranks up the slight difference in when sound hits either ear, increasing our sensitivity to help us localize the origin of the sound.
- For LOW frequencies (< 3 kHz), the medial superior olive (MSO) uses a coincidence-detector mechanism: when one of the MSO's little bundles of coincidence detectors hears input from both ears arriving at the same time, it has detected a coincidence. There are five stops on the way to the MSO. The relative position/location of the active MSO detector is the signal sent upward into auditory cortex; that positional "flavor" is the input that lets higher areas recognize that the signal came from more rightward or more leftward.
- For HIGH frequencies (> 3 kHz), the lateral superior olive (LSO) uses a mechanism based on reciprocal inhibition to enhance localization. Coincidence detection isn't a good way to make the distinction at high frequencies; instead, left and right inputs exaggerate small differences via contralateral inhibition to determine which side sends auditory info upstream to cortex. Whichever side's sound reaches the LSO first sends its signal up to auditory cortex and, at the same time, sends a signal to the LSO on the other side to inhibit detection there and enhance its own signal.
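The MSO's coincidence-detection idea can be caricatured in code: test a bank of candidate delays and keep the one where the two ear signals line up best. Everything here (the function names, the simulated noise burst, the 20-sample delay) is a made-up illustration of the principle, not the biological circuit.

```python
import random

def interaural_delay(left, right, max_lag):
    """Sketch of MSO-style coincidence detection: for each candidate lag,
    measure how well the right-ear signal, shifted by that lag, coincides
    with the left-ear signal; return the best-matching lag."""
    def coincidence(lag):
        return sum(a * b for a, b in zip(left, right[lag:]))
    return max(range(max_lag + 1), key=coincidence)

random.seed(0)
src = [random.uniform(-1, 1) for _ in range(5000)]  # broadband noise burst
# Simulate a sound reaching the left ear 20 samples earlier than the right
left, right = src[20:], src[:-20]
print(interaural_delay(left, right, 40))  # 20
```

The recovered lag is the interaural time difference; in the MSO the "winning lag" is encoded by which detector in the array fires, i.e., by position rather than by an explicit number.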

FFA breakdown

the FFA does show plenty of activity for non-face stimuli too; in fact, the FG might be the least selective visual area in this study. Study: Puce et al., JNS 1995 investigated the fusiform gyrus (FG) using intracranial recording, showing different visual cues (faces, textures, letter strings) and then looking at different regions of the brain. They reported some preference for faces in the FG, but it also showed activity to textures and letter strings. So it's not really a "face area" but a higher-order visual processing area that is particularly interested in faces.

Color specific activity can be a result of

the illusory presence of color, as in the McCollough effect: an aftereffect in which color is seen after long exposure to a colored version of the same pattern. This illusion of color activates "color-sensitive" regions of visual cortex. Thus activity here equates with the perception of color, even when no color is present: perception does not equal reality. The signal goes from the retina to cortex, and the percept then becomes dependent on cortex. This is true of almost any stimulus. Even in primary visual cortex, activity reflects perceived object size, not actual object size: context cues can extend V1 activation into anterior areas that represent full-field-sized objects (after ~100 ms). Cues for depth can be very powerful.

medial geniculate nucleus

the part of the thalamus that relays auditory signals to the temporal cortex and receives input from the auditory cortex

Blind Spot

the point at which the optic nerve leaves the eye, creating a "blind" spot because no receptor cells are located there. Because information exits the back of the eyeball here, we fill in the missing information, constructing/interpolating it so we don't actually see a black spot.

Auditory Illusions: The missing fundamental

we can trick the auditory system and create an auditory illusion by removing the fundamental frequency from a sound: artificially removing the strongest component and playing someone just the harmonics. Listeners will still report hearing the fundamental frequency. This reflects the fact that our auditory systems are trained to always hear the fundamental in the presence of its upper harmonics. We aren't used to upper harmonics without the fundamental, so when we don't hear it, we force it to be there. (Demo tones: "unnatural" with no harmonics; "natural"; "unnatural" with no fundamental.)
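A hedged sketch of how such a stimulus is built: sum harmonics 2f, 3f, and 4f only, leaving the fundamental f out. The resulting waveform still repeats at the fundamental's period, which is one way to see why the auditory system infers a pitch at f. (The function name and all numbers here are illustrative, not from the lecture.)

```python
import math

def harmonic_stack(f0, harmonics, rate, n):
    """Sum the listed harmonic multiples of f0; f0 itself can be omitted."""
    return [sum(math.sin(2 * math.pi * f0 * h * t / rate) for h in harmonics)
            for t in range(n)]

rate = 8000
# Harmonics at 400, 600, 800 Hz; the 200 Hz fundamental is missing
tone = harmonic_stack(200, [2, 3, 4], rate, 2000)
period = rate // 200  # 40 samples = the missing fundamental's period
repeats = all(abs(tone[t] - tone[t + period]) < 1e-6 for t in range(1000))
print(repeats)  # True
```

No 200 Hz component is present in the spectrum, yet the waveform's repetition rate (and the perceived pitch) is 200 Hz.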

Functional job description of size:

y-axis: % signal change; x-axis: visual angle of the stimulus.
- V1: extremely sensitive to the size of an object, with much more activity when it takes up a larger amount of visual area or visual angle. V1 is not concerned with high-order categories; it is only there to represent the simple physical features present, which it feeds into later stages of the visual system that do the work of integration to create the object or category being represented.
- V4: less sensitive to size.
- LO: barely cares, but does care from about 8 to 12 degrees (small to medium sizes); past the 12-degree medium mark there is only a slight enhancement.

