PSYC222 UNIT 2


amplitude + frequency

- Amplitude (sound feature) → loudness (psychological aspect)
- Sine wave representation of the pressure waves → also measures the "intensity" of the sound
- Intensity is measured in decibels (dB)
  *As humans, we can hear a wide range of sound intensities
  *20 dB = sound of leaves rustling... 40 dB = quiet library... 90 dB = heavy truck... 135 dB = jet take-off
  *Anything above 140 dB will be a "painful" sound (i.e., past our threshold)
- Frequency (measured in hertz)
  *Frequency (sound feature) → pitch (psychological aspect)
  *The number of times per second that a pattern of pressure change repeats (top to bottom over and over)
  *Hertz = the unit of measure for frequency, where 1 Hz equals one cycle per second
  *E.g. the air pressure in a 100 Hz sound goes back and forth 100 times per second
- Together, amplitude and frequency tell us about loudness and pitch
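The decibel scale above is logarithmic: each 20 dB step corresponds to a tenfold increase in sound pressure. A minimal sketch of the conversion (the 20 µPa reference pressure is the standard acoustics convention, not stated in the notes):

```python
import math

def sound_pressure_level(p, p0=20e-6):
    """Sound pressure p (in pascals) expressed in decibels (dB SPL).

    p0 = 20 micropascals, the standard reference pressure,
    roughly the quietest sound a young human ear can detect.
    """
    return 20 * math.log10(p / p0)

# Every tenfold increase in pressure adds 20 dB:
print(round(sound_pressure_level(20e-6), 1))  # 0.0 dB (threshold of hearing)
print(round(sound_pressure_level(20e-4), 1))  # 40.0 dB (about a quiet library)
```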

Relative size (monocular depth cue)

- As an object moves further away, the image projected onto the retina is smaller
- Closer objects project a larger image onto the retina
- We compare the sizes of two objects even without knowing the absolute size of either one

random dot stereogram

- Béla Julesz created random dot stereograms
- Thought that instead of object recognition occurring first, stereopsis occurs first and provides a cue that helps us identify objects
- Created the random dot stereogram --> if you're able to free fuse, you can see an object that pops out (this ability varies from individual to individual)
- I.e., demonstrated that stereopsis occurs early in visual perception and can define objects

modifying activity in dedicated brain regions (e.g. Faces vs Houses)

- Different regions attend to different types of objects
- FUSIFORM FACE area: activated when looking at faces
- PARAHIPPOCAMPAL place area: activated when looking at houses, cities, areas, etc.
- How is this related to attention? If you're asked to pay attention to a particular stimulus (house or face), you can selectively activate the corresponding brain region (in the temporal lobe)

Newsome & Paré (1988) study with correlated dot displays

- Global motion coherence task in which monkeys view a display of dots moving in various directions
- If all dots are moving in one direction, there is 100% motion coherence (displays can also have, e.g., 50% or 20% coherence, referring to the proportion of dots moving together)
- Note: monkeys and humans are generally pretty good at detecting this
- Found that monkeys could detect motion with coherence as low as 2-3%
- After this, lesions to area MT (middle temporal area) resulted in a reduced ability to detect correlated motion
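The coherence level can be thought of as the fraction of dots sharing one direction while the rest move randomly. A minimal sketch of how such a display could be parameterized (function and variable names are illustrative, not from the study):

```python
import random

def dot_directions(n_dots, coherence, signal_dir=0.0, seed=1):
    """Assign a motion direction (in degrees) to each dot.

    A `coherence` fraction of dots share `signal_dir`; the rest
    move in random directions, as in a correlated-dot display.
    """
    rng = random.Random(seed)
    n_signal = round(n_dots * coherence)
    dirs = [signal_dir] * n_signal
    dirs += [rng.uniform(0, 360) for _ in range(n_dots - n_signal)]
    return dirs

# 50% coherence: half of 100 dots carry the signal direction.
dirs = dot_directions(100, 0.5)
print(sum(d == 0.0 for d in dirs))  # 50
```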

how sound vibrations are transduced

- Occurs in the cochlea
- Begins the processing of amplitude and frequency information that underlies perceived loudness and pitch
Higher amplitude sound:
- As sound amplitude increases, there is a larger bulge in the vestibular canal
- Tectorial membrane shears across the organ of Corti more forcefully
- Hair cells are bent more, so more neurotransmitter is released
- Causes a higher firing rate of AN fibers
- Thus, the larger the amplitude of the sound, the higher the firing of the neurons that communicate with the brain
Sound frequency:
- This is more complex... there are multiple mechanisms for perceiving frequency
- TEMPORAL code: movement of the cochlear partition occurs in a pattern reflecting the frequency
- PLACE code: movement is closer to the base of the cochlea for high-frequency sounds and closer to the apex for low-frequency sounds

frontal eye fields

- In the premotor cortex of the frontal lobe
- Important for planning and triggering saccadic eye movements
- Relevant because every time you move your eyes, the image moves on the retina

Yarbus (1967) - Repin's 'They did not expect him'

- Looked at how we process a visual scene → studied cognition
- When individuals are presented with a scene, their eyes go to the "interesting" parts (which is specific to the individual)
- When presented with a face: we look at the eyes and then the nose/mouth region ("inverted triangle")
- When presented with a more complex scene: what guides the eye movements is what one is trying to "get out of the scene"
Experiment:
- 3 minutes scanning a copy of Repin's picture under various instructions → free examination
- Estimate the material circumstances of the family
- Give the ages of the people
- What was the family doing prior to the arrival of the visitor?
- Remember the clothes worn
- Remember the positions of the people and objects in the room
- Estimate how long the visitor had been away from the family
- Point is: our saccadic eye movement system is driven by what we want to get from the world
**we have top-down control over our visual system

C.S. Green & D. Bavelier

- Looked at whether playing video games affects visual attention in any way
- Compared "avid action video game players (VGPs)" to "non-video game players (NVGPs)" on many tasks
- Found that VGPs had better visual attention
- Problem: these people could have had better visual attention before ever playing video games... so we cannot draw a causal conclusion
- Thus, in 2007, they trained individuals and then gave them tasks
  *E.g. of tasks: visual crowding task (note: crowding is the phenomenon that it is more difficult to identify a target with distracting objects nearby)
  *Found VGPs performed better on all of the different tasks (same finding as the previous study)
  *What was new: those who didn't play video games were split into two groups → some were given video game training and some were not
  *New findings: those who had training showed improved visual spatial performance (and thus it can be concluded that video games do in fact improve visual spatial performance)

picture recognition performance

- Memory for pictures (Shepard, 1967; Standing et al., 1970)
- Looking back at the stages of the perceptual process: recognition
- Contradiction in the literature
  *Finding that people are very good when asked to remember a large number of pictures presented to them (really good memory for photo recognition)
  *Findings: Shepard (1967) - 98% correct recognition of 612 pictures (90% one week later); Standing et al. (1970) - 85% at 2,500 pictures

Occlusion (monocular depth cue)

- When one object partially blocks another, we are more likely to see them as overlapping objects, with the blocked object further away
- This interpretation is unlikely to be due to an accidental viewpoint (the shapes are not likely to be puzzle pieces that happen to line up)
- Used as a cue to tell us how close or how far objects are --> gives us a sense of depth

comparator and efferent copies

- Motor system sends two copies of each order to move the eyes
- One copy goes to the eye muscles
- Another (the efference copy / corollary discharge signal) goes to the comparator (a hypothetical structure)
  *Sends a signal to the visual cortex to alert it that an eye movement is about to occur
- Comparator compensates for image changes caused by eye movements
- How do you tell the difference between an image moving on the retina due to real motion versus our own eye movements?
  *Explained by the fact that every time an eye movement is made, two motor orders are sent from the motor cortex
  *One copy goes through the cranial nerve and tells the eye muscles to make a saccade
  *The efference copy goes to the comparator, which compensates for the image change caused by the eye movement

involuntary eye movements

- Optokinetic nystagmus: when our eyes involuntarily track a moving object - if an object starts moving, it may grab our attention
- Vestibulo-ocular reflex: a reflexive rotation of the eyeballs to compensate for head movements
- Microsaccades: involuntary small movements of the eye when we are trying to keep our eyes still (small saccades) - important for refreshing the visual image
- Drift: smooth slow movement of the eye when we are trying to keep it still (again, to refresh the visual image and maintain visual perception)

fundamental frequency

- The lowest frequency component of a harmonic sound - Plays a central role in perceived pitch - The other frequencies in harmonic sounds are integer multiples of it

pure tones

- Simplest sound wave (a single sine wave)
- When you hear a tone, air molecules push on their neighbors until the air near your ear is pushed and you hear a sound
  *Air can support many pressure waves at once
  *Various pure tones can mix and travel through the air at the same time
- Complex sounds are combinations of pure tones (musical instruments, human speech, city traffic, etc.)
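The pressure wave of a pure tone is a single sine function of time, and a complex sound is just the sum of several such tones. A minimal sketch (the helper names are illustrative):

```python
import math

def pure_tone(t, freq_hz, amp=1.0):
    """Air-pressure deviation of a single sine-wave (pure) tone at time t (s)."""
    return amp * math.sin(2 * math.pi * freq_hz * t)

def complex_tone(t, components):
    """A complex sound: the superposition (sum) of several pure tones.

    components: list of (frequency_hz, amplitude) pairs.
    """
    return sum(pure_tone(t, f, a) for f, a in components)

# A 100 Hz tone reaches its first pressure peak a quarter-cycle in:
print(round(pure_tone(1 / 400, 100), 6))  # 1.0
```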

voluntary eye movements

- Smooth pursuit: eye movements used to follow a moving object (i.e., when tracking a moving object)
- Vergence: focusing on something close - converge the eyes (rotate them inward); focusing on something far away - diverge the eyes (rotate them outward)
- Saccades: fast jumps of the eye that shift our fixation point from one spot to another (172,800 per day)
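The 172,800-saccades-per-day figure corresponds to exactly two saccades per second sustained over 24 hours, as a quick arithmetic check shows (the per-second rate is inferred from the daily total, not stated in the notes):

```python
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400 seconds in a day
SACCADES_PER_SECOND = 2          # rate implied by the daily figure

print(SACCADES_PER_SECOND * SECONDS_PER_DAY)  # 172800
```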

sharper tuning at the neuronal level

- Some neurons show sharper tuning because of attention → limits the orientations the neuron fires for
- When responses to non-optimal orientations diminish, the tuning curve becomes narrower and more selective (attention could make it easier for the neuron to find a weak vertical signal amid the noise of other orientations)
- An effect of attention in which a neuron responding to an attended stimulus responds more precisely
- Attention can change the preferences of a neuron
- The neuron will respond to a more limited set of stimuli

Familiar size (monocular depth cue)

- Sometimes we know what size certain objects are... we can use top-down processing - e.g. we know that pennies are always the same size, so we interpret smaller pennies as being further away than larger ones - i.e., differences in size tell us about depth

selective attention at the neuron level

- Somewhat hypothetical in humans (as demonstrating it requires single-cell recordings in V1) - Idea: if you have a V1 neuron that is sensitive to a particular feature, and you are trying to pay attention to that feature, the V1 neuron can change its response to support attention

characteristics of sounds waves

- Sound waves are created when objects vibrate → vibrations cause molecules in the air to vibrate → as molecules vibrate they create changes (waves) in pressure
- Sound waves are funneled into our ear for processing
- Compression and expansion of air molecules, e.g. as produced by a tuning fork's vibration
- Easily bend around objects (whereas light waves travel in straight lines)
- Can also bounce off objects (if you're in a small room, the sound will be louder)
- We directly hear sounds produced by objects (whereas light reflects off objects)
- Slower than light waves - e.g. why there is a delay between seeing lightning and hearing thunder
  *In some unique situations, objects can move faster than the speed of sound, which produces a visual trace and an auditory one, called a sonic boom
- Speed depends on the medium the sound travels in

harmonic spectrum

- Spectrum of a complex sound in which energy is at integer multiples of the fundamental frequency
- These are the most common types of sound we experience (i.e., how complex tones can be described) - e.g. human voices and musical instruments
- This is what gives us the perception of pitch, which is determined by the fundamental frequency
- A natural property of how humans and instruments produce their sounds
- The additional frequencies in a harmonic spectrum are at integer multiples of the fundamental frequency → these additional frequencies are the harmonics
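Because the harmonics sit at integer multiples of the fundamental, the spectrum's frequencies follow from a single number. A minimal sketch:

```python
def harmonics(fundamental_hz, n=5):
    """First n frequencies of a harmonic spectrum: integer
    multiples (1x, 2x, 3x, ...) of the fundamental frequency."""
    return [k * fundamental_hz for k in range(1, n + 1)]

# A voice or instrument with a 220 Hz fundamental:
print(harmonics(220))  # [220, 440, 660, 880, 1100]
```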

superior colliculus

- Subcortical midbrain structure (just below the thalamus) that controls saccadic eye movements - Active in vision and visual-motor coordination - Important for eye/head coordination

attack & decay of sounds

- The attack and decay of sounds are also important for distinguishing different sound objects - Attack = how the sound begins - Decay = how the sound ends - E.g. two violin notes can have the same amplitude and pitch yet give two different experiences because of their attack and decay

pathway through the brainstem to the cortex

- Cochlear nucleus: where AN fibers synapse (the first brainstem nucleus)
- Superior olives: an early brainstem region in the auditory pathway where inputs from both ears converge
- Inferior colliculus: a midbrain nucleus in the auditory pathway
- Medial geniculate nucleus (MGN): the part of the thalamus that relays auditory signals to the temporal cortex and receives input from the auditory cortex (in the temporal lobe)

cochlear partition

- The combined basilar membrane, tectorial membrane, and organ of Corti (together responsible for the transduction of sound waves into neural signals) - Waves cause the tectorial membrane to shear across the organ of Corti, causing hair cells to bend in one direction and then the other - Where hair cells are located (and function) - Information about sound is sent to the brainstem via the 8th cranial nerve (Auditory Nerve; AN)

tympanic membrane

- The eardrum - A thin sheet of skin at the end of the outer ear canal that vibrates in response to soundwaves

global motion detector

- The solution to the aperture and correspondence problems!! - All motion detectors converge on a cell or group of cells that can monitor them all - Determines what is actually happening from the outputs of the small motion detectors - Area MT (middle temporal area) is the proposed location of our global motion detector

vestibular canal

- One of the three fluid-filled canals in the cochlea - Extends from the oval window at the base of the cochlea to the helicotrema at the apex / scala vestibuli (upper part of the cochlea)

altered tuning at the neuronal level

- The tuning curve shifts so that the neuron is more responsive to the attended stimuli (the tilt aftereffect is an example → you can see altered tuning as a result of selective adaptation) - The neuron shifts its preference

oval window

- The window into the inner ear - Flexible opening to the cochlea through which the stapes transmits vibration to the fluid inside - Membrane forming the border between the middle ear and inner ear

illusory motion in static images

- There is no actual motion in the Rotating Snakes image, yet motion is perceived (illusory motion) - We don't yet have a consensus on how this works, but studies suggest that patterns like Rotating Snakes elicit directional responses from neurons in monkey cortex that correspond closely to the directions perceived by humans (i.e., V1 and MT - middle temporal area - are activated in monkeys when the snake illusion is viewed) - Eye movements are necessary for seeing the motion → if you look straight ahead and don't move your eyes, the motion should decrease - Luminance (brightness) differences between colors are important here - Note: hue does not matter (the illusion remains when the colors are replaced)

hair cells

- Specialized neurons that transduce one kind of energy (sound pressure) into another form of energy (neural firing) - Located in the cochlear partition - Inner hair cells convey sound information to the brain (via afferent fibers - fibers of the auditory nerve) - Outer hair cells receive signals from the brain to adjust the flexibility of the membranes (via efferent fibers) - There are three rows of outer hair cells and one row of inner hair cells - Hair cells in the vestibular organs also report head movements to the brain

uses of motion

- Allows us to navigate in our world - Helps us avoid collisions → we judge time-to-collision very well - Motion is also useful for identifying objects - Biological motion *Motion of living beings in our environment *Can tell us a lot about an individual: size, age, emotional state, identity *Note this refers not only to humans but to dogs and other beings as well - E.g. point-light figure

timbre

- A complex concept that explains why sounds with the same pitch and same amplitude can nonetheless sound different - I.e., the psychological sensation by which a listener can judge that two sounds with the same loudness and pitch are dissimilar - Related to the relative energy of different spectral components of sounds

visual neglect

- A deficit of spatial awareness
- Due to damage to the parietal lobe - normally right-hemisphere damage, so the left side of space is neglected
  *Damage often due to a stroke that affects one side
- Might only eat food from one half of the plate, or fail to acknowledge people on the neglected side
  **individuals often don't acknowledge the deficit
- To diagnose:
  *Standard experimental tasks are performed: (1) line cancellation task & (2) copying a picture task
  *Line cancellation task will demonstrate that lines on the left-hand side are not crossed out (i.e., are neglected)
  *Copying a picture will show that only the objects on the right side are drawn

apparent motion

- The illusory impression of smooth movement - If objects appear and re-appear in different places, our visual system interprets this as motion - Often still pictures presented one after the other (e.g. a cartoon)

acoustic reflex

- Tiny muscles around the ossicles help limit ossicle movement during prolonged loud noises - A reflex that protects the ear from intense sounds, via contraction of the stapedius and tensor tympani muscles *These muscles tense when sounds are very loud, restricting ossicle movement and muffling pressure changes that might be large enough to damage structures of the inner ear - The acoustic reflex cannot protect against abrupt loud sounds

complex tones

- Two properties: harmonics + timbre - Have multiple frequencies within them (combination of sound waves)

Plasticity in the auditory cortex (Klinke et al., 1999)

- Used kittens that were born deaf - no auditory nerve sensory information coming in - Implanted an electrode in the congenitally deaf cats that stimulated the auditory nerve - Electrode was attached to a microphone - Cats behaviorally responded to sounds within 3 weeks of training - The space devoted to auditory processing in the auditory cortex increased as well - The auditory cortex became larger compared to control animals → due to environmentally driven changes in auditory nerve activity, there were changes in the auditory cortex **early experience with auditory function is very important

face-processing

- When we look at faces there is a stereotypical pattern of looking at and focusing on the eyes and nose - This demonstrates that our eyes move to the part of the scene that is salient and informative (usually eyes, nose, face, mouth)

saccadic suppression

- When you make a saccade, the image of the world moves across your retina, yet humans don't notice the movement
  *This is because of saccadic suppression - when you make a saccade, your visual system essentially shuts down
  *It is a reduction in visual perceptual awareness - "Why, how, and where in the brain it happens is a matter of ongoing scientific debate" (Krekelberg, 2010)
- Our eyes are constantly moving - making a saccade is like your perceptual system blinking *it shuts down for a tiny bit of time
- To demonstrate: stand in front of a mirror and rapidly move your eyes back and forth from left to right - you should be able to see yourself staring back at you but will not see your eyes in motion (not because your eyes are moving too fast - it's because of saccadic suppression)
  *note: a person behind you would see your eye movements

tectorial membrane

- a gelatinous structure, attached at one end, that extends into the middle canal of the cochlea, floating above inner hair cells and touching outer hair cells - the taller stereocilia of outer hair cells are embedded in the tectorial membrane, and the stereocilia of the inner hair cells are nestled against it

organ of corti

- a structure on the basilar membrane of the cochlea that is composed of hair cells and dendrites of auditory nerve fibers - Made up of a scaffold of cells that support specialized neurons called hair cells

stereoscopes

- allowed people to realize that binocular disparity is important for depth perception (stereopsis) --> it is a depth cue! - a stereoscope presents one picture to one eye and a different picture to the other eye - a way to project a 2D image such that an individual sees a 3D image (static) - using mirrors - i.e., it is a "3D viewer" - modern-day stereoscope = 3D glasses (two filters: a red and a blue) --> you are looking at two similar images... the lenses block one image to the right eye and the other to the left eye (so each eye sees a different image), and then your visual system puts them together to make one 3D image

spatial attention (cueing paradigm)

- allows humans to selectively process visual information through prioritization of an area within the visual field

accommodation

- allows us to focus on objects that are close vs objects that are far - the lens becomes fatter when we want to see something close, while the lens becomes thinner to see something further away

enhancement at the neuronal level

- an effect of attention on the response of a neuron in which the neuron responding to an attended stimulus gives a bigger response - at a neural level, attention may enhance a neuron's responses - the neuron increases its firing!!

crayon binocular disparity + corresponding retinal points, horopter, panum's fusional area

- Bob is looking at crayons standing up in front of him - he is fixating on the red crayon, and thus the red crayon falls on the same part of the retina in both eyes (in the foveal region)
- now we can look at the rest of the crayons in the two retinal images (in the two eyes)
- the red and blue crayons fall on corresponding retinal points for Bob, and thus they have zero binocular disparity
- any object that falls on an imaginary circle (the horopter) has zero binocular disparity --> where you are fixating, there is an imaginary circle that extends from your retina, through space, to the fixation point
- objects just outside the horopter (i.e., slightly in front or behind) can be fused together by the visual system to provide us with binocular single vision (this region is known as Panum's fusional area)
- any object that is not in the horopter or Panum's fusional area can be interpreted as two separate objects, creating double vision

feature integration theory (Anne Treisman)

- can think of a visual search as having two stages: preattentive & attentive
- preattentive: the visual scene is analyzed in parallel (at the same time) by a large set of feature detectors **needed for feature search
- attentive: binding features together requires attention (needed for conjunction searches)
- can lead to the BINDING PROBLEM: the challenge of tying the different attributes of visual stimuli, which are handled by different brain circuits, to the appropriate object so that we perceive a unified object... e.g. if you're asked to find green verticals, it is harder if all other items pair purple with vertical and green with horizontal

"unconscious inference" (Helmoltz)

- combining and weighing guesses regarding possible depth relations between different objects in our visual field - leads to visual system arriving at a coherent, and more or less accurate, representation of 3D space

linear perspective (monocular depth cue)

- convergence of lines as they recede into the distance - parallel lines appear closer together and higher up (on the retina) as they get further away, until they reach a vanishing point

visual search

- dependent measures = reaction time & error rate - e.g. Where's Waldo
- the idea is that you have a target that you're trying to find in a display of distractors
- target = the goal of the visual search (the participant knows what the target is and is supposed to report whether it is or isn't there)
- distractor = any stimulus other than the target (the number of distractors can be manipulated)
- set size = the number of items in the display
- participant task: say whether the target is present or absent in the display --> try to overtly attend to the target to confirm that it is in fact present
types:
**feature search - target is defined by a single visual feature (e.g. color, shape, orientation) - just as fast to say a target is present as absent, since we are able to search for many visual features in parallel (can take in the entire display and process all features at once) - basic features include shape, size, color, orientation and motion - if the item is sufficiently salient, it should pop out no matter how many distractors there are
**parallel search - processing the color or orientation of all items at once --> a visual search in which multiple stimuli are processed at the same time - many basic features can be searched for in parallel, including size, color, shape, orientation & motion
**conjunction searches - in the real world, we're often not looking for one single feature (e.g. finding your car in a lot) - combining two different visual properties in order to find the target (this is what we do most often) - takes longer than feature search -- an additional 5-15 ms per feature + an additional 10-30 ms per distractor to find the target
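The set-size effect above is often summarized with a straight-line model, RT = baseline + slope × set size: feature ("pop-out") search has a near-zero slope, while conjunction search costs extra time per item. A minimal sketch (the 400 ms baseline and 20 ms/item conjunction slope are illustrative values, not from the notes):

```python
def predicted_rt(set_size, baseline_ms=400, slope_ms_per_item=0):
    """Linear model of visual search reaction time (in ms)."""
    return baseline_ms + slope_ms_per_item * set_size

# feature search: adding distractors barely matters (slope ~0)
print(predicted_rt(4), predicted_rt(32))  # 400 400
# conjunction search: each extra item adds time (e.g. ~20 ms/item)
print(predicted_rt(4, slope_ms_per_item=20),
      predicted_rt(32, slope_ms_per_item=20))  # 480 1040
```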

inattentional blindness

- failure to notice (or report) a stimulus that would be easy to report if you were attending to it - e.g. observers keep track of a ball in a passing game and fail to notice a man walking through in a gorilla suit

motion parallax (non-static, monocular depth cue)

- further images appear to move slower & closer images appear to move faster - e.g., if you're in a train, objects in front of you move fast while buildings in the back appear more static

binocular summation

- if two eyes both look for the same hard-to-see target, the combination of signals from both eyes makes performance on many tasks better than with either eye alone - may have provided the evolutionary pressure that first moved eyes toward the front of some birds' and mammals' faces

moon illusion

- illusion potentially related to depth perception - measured in units of visual angle, the moon is the same size at its zenith as on the horizon - the moon typically looks much larger on the horizon than when it's high in the sky - this is an illusion because in reality the moon is the same size - the moon actually does appear the same size on the retina - when the moon is on the horizon, it is often occluded by objects (which may play a part in the illusion)

correspondence problem for motion

- the problem of figuring out which feature in one view should be matched with which feature in another - for motion, this means matching a bit of the image at one moment with the corresponding bit at the next moment - the same term is used in binocular vision for the problem of matching a bit of the left eye's image with the corresponding bit in the right eye's image

development of stereopsis

- infants are blind to binocular disparity until about 4 months (don't combine information from the two eyes until this time) - in monkeys: there are neurons in V1 tuned to binocular disparity at 1 week - possible explanation: V2 neurons are not working at birth or the signals are too weak at birth (i.e., some of this is innate and not acquired through experience)

Aerial perspective (monocular depth cue)

- light scattered by the atmosphere makes more distant features appear hazy and blue - further images appear fainter and less distinct

cueing paradigm

- measure of covert attention - people are told to keep their fixation at a central point and respond when they see a target appear
- they usually respond with the location of the target (left or right)
- prior to the target appearing, they are cued to one location
- if the cue is valid, the target appears in the cued location --> people are fastest and most accurate in this condition
- if the cue is invalid, the target does NOT appear in the cued location --> people are slower and less accurate in this condition
- this shows that visual attention can be spatially based (meaning we can direct our covert attention to a given spatial location)
- the cueing paradigm gave us the "spotlight of attention" metaphor

spotlight of attention metaphor

- metaphor for attention - idea that our visual spatial attention sweeps across our visual field like a spotlight --> our point of fixation sweeps across a space - useful heuristic (i.e., not really accurate) - from the cueing paradigm

ossicles

- in the middle ear
- the ossicles are the smallest bones in the body, and the muscles that move them are the smallest muscles
- one way they amplify sound: the joints between the bones make them work like levers (a small force on one end results in a larger force on the other)
- a second way they amplify sound: they transmit vibration from a large surface (the tympanic membrane) to a small surface (the oval window) → this change in surface area amplifies sound
- basically, the ossicles take sound waves and amplify them
- Malleus: connected to the tympanic membrane on one side and to the second ossicle (the incus) on the other
- Incus: connected on one side to the malleus and on the other to the stapes
- Stapes: the most interior of the three ossicles - connected to the incus on one end; presses against the oval window of the cochlea on the other end --> transmits the vibrations of sound waves to the oval window (a membrane that forms the border between the middle and inner ear)

Relative height (monocular depth cue)

- objects further away appear higher up in the retinal image (note: this often works with familiar size; higher up + smaller) - relative height and relative size **work in concert to tell us what's going on in an image

Top-Down Control of Eye Movements

- own personal experience can influence what we look at and focus on - face processing + Yarbus study

change blindness paradigm

- people often don't notice a small change in a scene (even when their task is to look for it)
- suggests that people don't encode every aspect of the scene they're currently viewing
Techniques:
- Flicker paradigm: a photo flickers, and an object disappears and reappears after each flicker
- Gradual change: look at a photo (scene) with things gradually changing
- Eye-contingent change: the change occurs once the eyes have scanned over the object that is going to change

ponzo illusion

- related to linear perspective: the two lines are actually the exact same length... as parallel lines extend into space, they appear closer together as they recede - the Zöllner illusion and Hering illusion are also related to the linear perspective depth cue - general takeaway: sometimes our perceptual system guesses wrong!

strabismus

- related to the development of stereopsis - about 3% of human infants are born with two eyes that don't point at the same spot in the world - two types: esotropia = one eye points toward the nose (cross-eyed); exotropia = one eye points to the side - early-onset strabismus can have serious effects on the developing visual nervous system and on visual performance - you generally need to develop binocular disparity by 4-6 months, or you don't develop it at all

gist of scene relative to spatial frequency

- similar gist means similar spatial frequencies - visual pictures and scenes with similar meaning are also similar in aspects of their spatial frequency --> pictures with similar gist have similar spatial frequencies
- this allows us to recognize a picture at a global level (face, city road, etc.) because of its spatial frequency components & our experience seeing things
- time and attention are needed to identify the finer aspects of a scene
- Intraub (1981) had participants monitor a stream of pictures (using RSVP) for the presence of a particular scene
  *participants could do this even when the pictures were presented for only 125 ms!
  *in change blindness experiments, it is usually the local aspects of the scene that are changed (higher spatial frequencies) --> these need more time

eye movement muscles

- superior / inferior oblique - superior / inferior rectus - medial / lateral rectus

binocular disparity

- the differences between the two retinal images of the same scene - disparity is the basis for stereopsis, a vivid perception of the 3D structure of the world that is not available with monocular vision - smaller binocular disparity = object is closer to the horopter (near the depth you are fixating) - larger binocular disparity = object is further from the horopter

acoustic spectrum

- the distribution of energy as a function of frequency for a particular sound source - the range of frequencies and decibel levels over which we can perceive sound - human range of hearing (20-20,000 Hz) *as we age, the range of frequencies gets smaller *bats can hear up to 50,000 Hz, while elephants can hear much lower frequencies - there's an interaction between the physical properties of sound (frequency) and the ability of humans to hear (sound pressure level)

vergence (convergence & divergence)

- the small rotations of the eyeballs that we make when we are trying to focus on near or far objects - tiny movements of the eyes can be used as depth cues - convergence: rotate eyeballs inward to see something closer - divergence: rotate eyeballs outward to see something further away
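The link between vergence and distance is simple triangle geometry, which is why these tiny rotations can serve as a depth cue; a sketch assuming a hypothetical 6.3 cm interpupillary distance (a number chosen for illustration, not from the card):

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.063):
    """Angle between the two lines of sight when both eyes fixate a
    point straight ahead at distance_m. ipd_m (interpupillary distance)
    is an assumed average value for illustration."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

# Nearer targets require larger convergence angles:
# vergence_angle_deg(0.25) > vergence_angle_deg(1.0)
```

The angle shrinks rapidly with distance, which fits the textbook point that vergence is only a useful depth cue for relatively near objects.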

stereopsis

- the technical term for binocular perception of the third dimension (depth) - it is special in that it can provide very high-resolution depth information in the absence of other cues - while it is not a necessary condition for depth or space perception, it does add richness to our perception of the 3D world

object-based attention (variation of the cueing paradigm)

- there are two rectangles: fixate on the star and report where targets appear - the cue is a bolded line at one of four locations: top/bottom and left/right - the cued location is always better than the uncued location: performance at A is best (spatial attention) - performance was also better when the target appeared at point B (on the cued rectangle) than at point C (on the other rectangle) --> due to object-based attention - suggests that in addition to visual attention being spatially based, it is also object based **attention spreads across the attended object

Bayesian Approach

- this puts the "unconscious inference" idea on a more rigorous mathematical footing - our prior knowledge (of depth cues) can influence our current interpretation of a scene (e.g. how close or far objects are) - sometimes the best way to see this is by seeing when it breaks down (e.g. illusions)
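The "mathematical footing" here is Bayes' rule: posterior ∝ likelihood × prior. A toy sketch over two made-up depth hypotheses ("near" vs "far"); the names and numbers are illustrative only, not from the lecture:

```python
def posterior_near(prior_near, likelihood_near, likelihood_far):
    """Bayes' rule over two depth hypotheses.
    prior_near: prior probability the object is near (from experience).
    likelihood_*: probability of the current image under each hypothesis."""
    prior_far = 1 - prior_near
    p_near = likelihood_near * prior_near   # unnormalized posterior, 'near'
    p_far = likelihood_far * prior_far      # unnormalized posterior, 'far'
    return p_near / (p_near + p_far)        # normalize

# With ambiguous evidence (equal likelihoods), the prior dominates:
# posterior_near(0.8, 0.5, 0.5) is approx. 0.8
```

This also shows why illusions reveal the mechanism: when a strong prior meets misleading image evidence, the posterior (what we perceive) is pulled away from the true state of the world.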

aperture problem for motion

- when a moving object is viewed through a small window (an aperture), the direction of motion of a local feature may be ambiguous - each of our motion-sensitive neurons / receptors has a small receptive field, which we can think of as a small window onto the visual world

Texture gradient (monocular depth cue)

- when relative height and size work together, we get a better indicator of depth - objects further away appear smaller and more densely packed

stereograms

- you may also be able to free fuse stereograms --> free fusion is the idea that you don't need a stereoscope to experience the image in 3D... by crossing your eyes you can fuse the two images and see the scene in 3D - another way to see static, pictorial, 2D images in 3D - it was originally assumed that stereopsis occurred late in visual processing --> first recognize the object, then match parts of the two retinal images

Rapid Serial Visual Presentation (RSVP)

Can tell us about visual attention... the task: - You'll be presented with a series of letters, one at a time - Name the letter that is presented in red as soon as you see it - With large, clearly visible stimuli we can reliably pick an X out of letters when the characters appear at a rate of 8-10 items per second But what if you need to identify two targets, T1 and T2, in the RSVP stream? (e.g. T1 = white letter in a stream of black letters, T2 = the letter X) - ATTENTIONAL BLINK: you miss the second target if it's too close to the first target (our attention "blinks" after we identify a target, similar to how we physically blink our eyes) --> the 200-500 ms window pertains to attentional selection - once the participant shifts from searching for T1 to T2, they pay no attention to T1 anymore (the white letter is now irrelevant) - your attention can also dip while still processing T1, causing you to miss T2

attentional selection

You don't change your point of fixation, but you can still attend to / select a particular feature

Pressnitzer, Graves, Chambers, de Gardelle, Egré (2018)

Examined the auditory "Laurel"/"Yanny" social illusion (i.e., you don't realize it happens unless you talk to other people) that gained popularity a few years back - The social illusion involves an ambiguous auditory stimulus (similar to "The Dress") Explored how the frequency content of the stimulus impacts perception - 289 participants completed an online survey where they were asked to choose between "Laurel" and "Yanny" for 11 sound clips presented 3 times each in a random order + a confidence rating was recorded - The clip in the middle of the set is the original sound clip (shown as an auditory spectrogram) - The clips to the left of the middle are progressively low-pass filtered, so that the higher-frequency components of the sound are reduced - The clips to the right of the middle are progressively high-pass filtered, so that the lower-frequency components of the sound are reduced Results - Participants were split into categories: those who almost always heard "Laurel", those who almost always heard "Yanny", and those whose perception changed with the stimulus (Intermediate group) - Most were in the intermediate group - Note: all participants were very confident in their responses (i.e., in their perception of the sound) - For participants in the intermediate group, the low-pass filtered versions were more likely to be heard as "Laurel" and the high-pass filtered versions were more likely to be heard as "Yanny" Conclusions - The authors concluded that with the original ambiguous clip, listeners are likely "perceptually emphasizing" different parts of the frequency range - There were also effects of listening equipment and some individual differences - As the authors note: "Perception is not a passive registration of external information....perception must make inferences, which are often unconscious" **it is active and involves guesses most of the time
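The low-pass manipulation the study describes can be sketched with an ideal FFT filter; this is an illustration of the filtering idea (removing components above a cutoff), not the authors' actual stimulus processing:

```python
import numpy as np

def lowpass(signal, cutoff_hz, fs):
    """Ideal low-pass filter: zero out all frequency components above
    cutoff_hz using the real FFT, then transform back."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[freqs > cutoff_hz] = 0  # discard high-frequency energy
    return np.fft.irfft(spectrum, n=len(signal))

fs = 8000                               # sampling rate (Hz), illustrative
t = np.arange(fs) / fs                  # 1 second of time samples
low = np.sin(2 * np.pi * 200 * t)       # low-frequency component (200 Hz)
high = np.sin(2 * np.pi * 2000 * t)     # high-frequency component (2000 Hz)
mixed = low + high
filtered = lowpass(mixed, 1000, fs)     # keep only energy below 1 kHz
```

After filtering, only the 200 Hz component survives; a high-pass version would zero the bins below the cutoff instead, which is the mirror-image manipulation used for the clips on the other side of the set.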

illusory conjunctions

Example of a binding error... Demonstration - A display with letters of various colors flashes on the screen; write down as many letter/color combinations as possible - During this you might experience an illusory conjunction, in which you recall erroneous (wrong) combinations of the features presented in the display (e.g. you report a red X, but although there are red letters and Xs, those features never appear together) - Thus, these are false combinations of features from 2+ different objects → we wrongly conjoin features that are present in the display

first-order vs second-order motion

First-order motion: luminance-defined objects that change position over time *all of what we've talked about so far Second-order motion: texture- or contrast-defined objects that change position over time **allows us to identify objects through movement - Reminiscent of random-dot stereograms (no object is seen until the motion is perceived) - No object is moving until we are able to see the motion - Just as random-dot stereograms prove that matching discrete objects across the two eyes is not necessary for stereoscopic depth perception, second-order motion proves that matching discrete objects across movie frames is not necessary for motion perception

middle ear

Function: amplify sounds so they can then be processed and transduced in the inner ear Parts: tympanic membrane + ossicles

outer ear

Function: funnel in sound waves from our environment Pinna - Where sound is first collected - The fleshy, funnel-like outer part of the ear - Sound is funneled by the pinna into the ear canal Ear canal - Takes the sound waves in and funnels them to the structures that will process and transduce them for us - The canal that conducts sound vibrations from the pinna to the tympanic membrane and prevents damage to the tympanic membrane - Ear canal length and shape enhance certain sound frequencies and protect the tympanic membrane

cranial nerves dedicated to eye movements

III - Oculomotor: sends signals to all extraocular muscles except the superior oblique & lateral rectus IV - Trochlear: sends signals to the superior oblique VI - Abducens: sends signals to the lateral rectus

motion agnosia (Akinetopsia)

Inability to perceive motion through visual inputs (i.e., motion blindness)

misdirection during magic (Kuhn & Teszka, 2018)

A key principle in magic is misdirection of attention The study - Explored misdirection in both children and adults during the lighter-drop magic trick - Both overt and covert attentional mechanisms were explored - Kids under 10 and adults - Previous studies indicate that around age 10 you start developing attentional control - Using a magic trick lets us measure overt attention (looking at one thing) and covert attention (visual search) in a more natural setting - Participants were asked to identify what the magic trick was and whether they could figure out what happened (not naive) - Given a task to report what happened (more direction - i.e., they know there will be a trick) - Measured eye movements with eye tracking (objective measure) + asked what they saw happen (subjective measure) Results - Children were more likely to miss the lighter being dropped than adults (68% children vs 42% adults) - Adults were more likely to fixate on the hand dropping the lighter than children (22% adults vs 0% children) - Results suggest that children under 10 are less able to inhibit attentional misdirection (both overt and covert) - Note the following difference: Overt = wherever you are directly fixating Covert = ability to pay attention to something you are not directly fixated on

missing fundamental effect

People perceive the pitch of the sound as unchanged even though the lowest frequency component is absent
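One way to see why the pitch survives: the perceived pitch corresponds to the common spacing of the remaining harmonics, i.e. their greatest common divisor. A small sketch (the specific frequencies are illustrative, not from the card):

```python
from functools import reduce
from math import gcd

def implied_fundamental(harmonics_hz):
    """Missing fundamental: the perceived pitch matches the greatest
    common divisor of the harmonic frequencies, even when no energy
    is physically present at that frequency."""
    return reduce(gcd, harmonics_hz)

# A complex tone with components at 400, 600, and 800 Hz is heard as
# having a 200 Hz pitch, even though the 200 Hz component is absent.
```

Removing the lowest component (e.g. playing only 400/600/800 Hz from a 200 Hz harmonic series) leaves the spacing unchanged, so the perceived pitch is unchanged too.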

auditory cortex (in temporal lobe)

Primary Auditory Cortex (A1) - The first area in the temporal lobes of the brain responsible for processing acoustic information - Located in the back of the temporal lobe - Processes fundamental sound attributes (responsive to pure tones and other stimuli) Belt Area - A region of the cortex directly adjacent to A1 with inputs from A1 where neurons respond to more complex characteristics of sound Parabelt Area - A region of cortex, lateral and adjacent to the belt area, where neurons respond to more complex characteristics of sounds, as well as to input from other senses

Motion Detection Circuit (i.e. Reichardt Model of Motion)

Requirements for an effective motion detector (i.e., how local motion detectors may respond in V1) - Two adjacent receptors (e.g. neurons 1 and 2), separated by a fixed distance - A bug (or spot of light) moving from left to right would first pass through neuron 1's RF and a short time later would enter neuron 2's RF → a third cell that "listens" to both neurons should be able to detect this movement Ladybug e.g. - When an object (like a ladybug) moves, it is registered by the RFs of separate but adjacent neurons 1 and 2 - However, a motion-detection neuron M that simply sums its inputs responds identically to a single moving ladybug and to two separate stationary ladybugs - To distinguish motion, the circuit needs additional neurons: the first delays neuron 1's input to M, while the second (X) fires only when both neurons (1 and 2) are stimulated → this combination of inputs allows M to detect motion - In detecting longer-range motion, a single M cell fires continually as the bug moves across the RFs of five distinct neurons
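The delay-and-coincidence logic above can be sketched in a few lines; this is a simplification of the Reichardt idea (multiplication standing in for the coincidence cell X), not the exact circuit diagram from the course:

```python
def reichardt_response(stimulus, delay=1):
    """Minimal delay-and-coincidence motion detector.
    stimulus: list of (left, right) receptor activations per time step.
    Rightward motion makes the *delayed* left signal coincide with the
    *current* right signal, so only motion drives the detector."""
    response = 0
    for t in range(delay, len(stimulus)):
        left_delayed = stimulus[t - delay][0]  # neuron 1's input, delayed
        right_now = stimulus[t][1]             # neuron 2's current input
        response += left_delayed * right_now   # fires only on coincidence
    return response

# A spot moving left-to-right: left receptor fires at t=0, right at t=1.
moving = [(1, 0), (0, 1)]
# Two stationary spots flashed simultaneously at t=0.
static = [(1, 1), (0, 0)]
```

The moving stimulus drives the detector while the simultaneous stationary flashes do not, which is exactly the distinction the plain summing cell M fails to make.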

psychoacoustics

The field that investigates the perception of sound using a psychophysical approach Perception of loudness: - While it's related to amplitude, other factors also contribute to loudness - Frequency matters... equal-amplitude sounds can be perceived as softer or louder depending on their frequency - Timing also matters... longer sounds tend to be perceived as louder than shorter sounds Perception of pitch: - Listeners perceive a greater rise in pitch for low-frequency sounds (even though the change in frequency is the same) Examines how listeners identify sounds in noise (masking) - There is always noise in our environment, so how do we perceive sound against a noisy background? - White noise = broadband noise containing equal energy at every frequency in the human auditory range (so every frequency you can perceive is in white noise) → sounds can still be heard against this background Also concerned with temporal resolution - Sound duration discrimination (how long is this sound?) - Sound gap detection (how long is the silence between sounds?)

inner ear

The purpose of it: - To transduce sound pressure (waves) into neural signals - Consists of many structures that allow us to take sound energy and convert it into neural energy - Contains three fluid-filled canals: vestibular, middle, and tympanic The major parts of it - oval window - vestibular canal - cochlear partition - basilar membrane - tectorial membrane - organ of Corti

motion aftereffects (MAE)

After prolonged viewing of motion in one direction, a stationary scene appears to drift in the opposite direction - reflects opponent processes in motion detection (similar to color detection): up-down, left-right, expansion-contraction E.g. the waterfall demo

basilar membrane

a plate of fibers that forms the base of the cochlear partition and separates the middle and tympanic canals in the cochlea

auditory nerve

beyond the ear - Afferent and efferent fibers - Characteristic frequencies Low-, mid-, and high-spontaneous fibers - 10-30 AN fibers per hair cell **i.e., each hair cell sends to many AN fibers - Low-spontaneous fibers: low rate of spontaneous firing, require more intense sounds **analogous to cones - High-spontaneous fibers: high rate of spontaneous firing, very sensitive to sounds **analogous to rods - Mid-spontaneous fibers: in between the low and high fibers - respond at mid-range intensity levels - In sum, the auditory system cannot rely on a single nerve fiber → it must consider the entire pattern of activity across fibers

overt attention

eye movement to attended object (to pay attention to one thing)

covert attention

no eye movement (peripheral vision) - something in the visual field but not focusing your attention to it

physiology of stereopsis

occurs in V1 of the visual system - in order for stereopsis to occur, there must be neurons receiving input from both retinas binocular neurons: these neurons have two receptive fields (one for each eye) - in V1, the two receptive fields tend to be similar (same receptive-field properties) - some binocular neurons respond most when an image falls on the same part of both retinas (this is what gives us the basis for the horopter) - some respond most when the images occupy different retinal positions (these are the binocular neurons sensitive to binocular disparity)

valid, invalid, neutral cues

valid cue = box appears where the target will appear (giving correct information) --> speeds RT (response time) relative to neutral and invalid cues invalid cue = box appears where the target does not appear (e.g., cue on the right, target on the left) neutral cue = uninformative **the cue grabs covert attention to a particular spatial location

