Perception

Describe how a cochlear implant manipulates sound

A cochlear implant presents different waveforms to different electrodes to mimic the frequency analysis of the normal mammalian cochlea. It is analogous to a filter bank, whereby the normal auditory system filters sound by frequency along the basilar membrane. Cochlear implants achieve this in one of two ways. The compressed analogue method delivers continuous analogue waveforms simultaneously to four electrodes. However, this causes interaction between channels through conduction of electrical current, distorting spectral information. Thus, most cochlear implants use the continuous interleaved sampling (CIS) method, which uses non-simultaneous pulsatile stimulation to minimise electrode interactions, with pulse amplitudes modulated by the envelopes of the bandpass filter outputs.
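The CIS idea (bandpass filter bank, envelope extraction, interleaved pulses) can be sketched as a toy encoder. This is only an illustration of the principle, not a clinical processing strategy: the band edges, pulse rate and smoothing window below are arbitrary assumptions, and FFT masking stands in for a real bandpass filter bank.

```python
import numpy as np

def cis_encode(signal, fs, bands, pulse_rate=900):
    """Toy continuous interleaved sampling (CIS) encoder.

    Splits the signal into frequency bands, extracts each band's envelope
    by rectification and smoothing, then samples the envelopes with
    interleaved (non-simultaneous) pulse trains, one per electrode.
    """
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spectrum = np.fft.rfft(signal)
    period = int(fs / pulse_rate)          # samples between pulses on one electrode
    pulses = np.zeros((len(bands), n))
    for i, (lo, hi) in enumerate(bands):
        # crude bandpass: zero out spectral components outside the band
        band = np.fft.irfft(spectrum * ((freqs >= lo) & (freqs < hi)), n)
        # envelope: rectify then smooth with a short moving average
        env = np.convolve(np.abs(band), np.ones(64) / 64, mode="same")
        offset = i * period // len(bands)  # stagger electrodes so pulses never coincide
        idx = np.arange(offset, n, period)
        pulses[i, idx] = env[idx]          # pulse amplitude follows the band envelope
    return pulses

fs = 16000
t = np.arange(0, 0.05, 1 / fs)
sig = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)
out = cis_encode(sig, fs, bands=[(100, 800), (800, 4000)])
# interleaving: no time sample carries a pulse on more than one electrode
assert ((out != 0).sum(axis=0) <= 1).all()
```

The final assertion captures the key point of CIS over simultaneous analogue stimulation: at any instant, at most one electrode is active, minimising channel interaction.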

Give a brief description of the principle of univariance as it relates to cone cells.

A cone responds to light anywhere within its span of sensitivity, but it can only respond through a change in its baseline activity, signalling more or less. However, a stimulus can vary across two dimensions: intensity and wavelength. The cone's output will signal "more" as the light becomes more intense, or as the light's wavelength moves closer to the cone's peak sensitivity, but many different combinations of wavelength and intensity can produce the same "more" or "less" response. This is the principle of univariance: a single cone cell cannot discriminate changes in intensity from changes in wavelength, because one and the same visual receptor can be excited by different combinations of the two. For example, a bright light of a non-preferred wavelength can produce the same response as a dim light of the preferred wavelength. Thus, the brain cannot know the colour at a given point of the retinal image from one cone class alone; the L, M and S cones by themselves are completely colourblind. This can be observed in monochromats, who have only one type of cone cell. A normal human visual system gets around this, allowing the perception of colour, by comparing responses across the three cone types (L, M and S).
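Univariance can be made concrete with a toy cone model. The Gaussian spectral sensitivity, peak wavelengths and bandwidth below are illustrative assumptions, not measured cone fundamentals.

```python
import numpy as np

def cone_response(intensity, wavelength, peak, width=60.0):
    # Toy cone: output depends only on intensity weighted by a Gaussian
    # spectral sensitivity (peak and width are illustrative assumptions)
    return intensity * np.exp(-((wavelength - peak) ** 2) / (2 * width ** 2))

# sensitivity of an L-like cone (peak 560 nm) to a 625 nm light
s_625 = cone_response(1.0, 625, peak=560)

# Univariance: a dim light at the preferred wavelength and a suitably
# brighter light at a non-preferred wavelength give the same output
dim_preferred = cone_response(1.0, 560, peak=560)
bright_offpeak = cone_response(1.0 / s_625, 625, peak=560)
assert np.isclose(dim_preferred, bright_offpeak)

# A second cone type (M-like, peak 530 nm) responds differently to the
# two lights, so comparing across cone types breaks the ambiguity
m_dim = cone_response(1.0, 560, peak=530)
m_bright = cone_response(1.0 / s_625, 625, peak=530)
assert not np.isclose(m_dim, m_bright)
```

The first assertion is the cone's "colourblindness"; the second shows why the ratio of responses across cone classes carries the wavelength information that any single class discards.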

Explain how sound is transmitted in the inner ear

Acoustic energy enters the cochlea through the oval window, causing the basilar membrane (BM) to vibrate up and down. Different parts of the BM are sensitive to different frequencies: lower frequencies cause maximal vibration at the apex, while higher frequencies cause maximal vibration at the base. This vibration creates a shearing force between the tectorial membrane (TM) and the BM, causing the hair cell stereocilia to bend back and forth. Deflection of an inner hair cell's (IHC) stereocilia depolarises the cell, causing the release of neurotransmitter onto the auditory nerve. Movement of the outer hair cells (OHCs), on the other hand, amplifies the movement of the BM, helping to detect low-amplitude sounds that might not otherwise vibrate the BM sufficiently to trigger a response.

How does spatial vision differ in the periphery vs. the fovea?

Acuity is worse in the periphery than at the fovea. Resolution decreases rapidly with distance from the fovea as retinal cone density decreases, making us less able to detect luminance contrast (spatially opponent cells detect the edges of contrast bars). This is demonstrated by how letters in the periphery are harder to read than at the fovea (Anstis, 1998), though the decrease in resolution can be overcome by increasing letter size with increasing eccentricity in the visual field. It can also be demonstrated using a CSF: vision in the periphery has lower acuity and less contrast sensitivity at lower SFs (Rovamo et al., 1978). In addition, visual crowding increases in the periphery, impairing recognition of objects in clutter. This is not a limitation of acuity, as it affects objects that are otherwise visible in isolation. Observers are unable to report a crowded target's orientation, but can report the average orientation (Parkes et al., 2001), suggesting pooling of target and flanker identities, perhaps to simplify visual input. This differs from foveal processing, where differences are emphasised. Crowding is potentially due to receptive field (RF) size increasing from the fovea to the periphery.

Describe the role of the medial intraparietal area (MIP)

Another egocentric frame of reference in the brain is the reach-centred frame of reference, where vision and action are coordinated. Such a representation is thought to be encoded by neurons in the medial intraparietal area (MIP). Neurons in MIP discharge specifically depending on the direction of hand movements towards a visual target. For example, increased neuronal activity was recorded in MIP when monkeys used a joystick to guide a spot to a visual target presented on a computer monitor (Eskandar & Assad, 2002), indicating that MIP neurons are involved in the coordination of hand movements and visual targets. The activity was much lower when monkeys simply watched a play-back version of a trial without moving the joystick. It has been claimed, accordingly, that MIP neurons transform the spatial coordinates of the target to be reached (e.g. an object) into a representation that can be used by the motor system for computing the respective movement vector (Cohen & Andersen, 2002). These coordinate transformation processes may take place even before an arm movement is initiated, and such cross-modal representations of the spatial relations between a motor effector (e.g. the hand) and a nearby visual stimulus (e.g. an object) are also likely to contribute to monitoring limb position during an actual reaching movement.

Describe evidence for and against the spotlight metaphor of attention, and reconcile the different evidence

Attention was first described as having a focus, a margin and a fringe by William James. It has been likened to a spotlight that allows us to selectively attend to specific parts of the visual field. The spotlight of attention is known to move through space (Posner et al., 1980) in an analogue fashion (Shulman et al., 1979): when a cue was presented to orient attention to a target on the right, reaction time was faster to a target nearer the cue than to one further away. The movement was also continuous and did not involve the suppression found in saccades. Eriksen and Hoffman (1972) proposed that the spotlight has a fixed size, demonstrating distractor interference only within a certain radius (estimated at 1 degree of visual angle); information is only processed if it falls within the beam of the spotlight. However, LaBerge (1983) found that the width of the spotlight can be varied, by manipulating the spread of attention over a five-letter word. When participants focused on the central letter, responses to that letter were quicker than to the other letters, but when participants focused on the entire word, responses to all letters were equally quick. Furthermore, Treisman et al. (1983) demonstrated that the spotlight might be object-based rather than space-based: the time taken to read a word aloud and indicate a gap's position in a frame differed depending on whether the word was placed inside or outside the frame, contrary to the spotlight's prediction. A possible reconciliation was offered by Tsal and Lavie (1988; 1993), who demonstrated that even though visual selection can be based on other stimulus properties, it is still conducted by attending to the location of the selected object. Participants responded to targets that shared either the cued location (but a different colour) or the cued colour (but a different location), and RTs were faster for letters in cued positions than for letters in the cued colour.

Describe the mechanisms behind synaesthesia

Bargary and Mitchell (2008) proposed that there is generally a difference in neural connectivity between inducer areas and concurrent areas. However, whether this connectivity difference reflects additional structural connections or functional disinhibition of existing connections remains debated. Rouw and Scholte (2009) found extra structural brain connectivity in grapheme-colour synaesthetes. As synaesthesia runs in families, the formation of extra structural connections during brain development may be due to genetic variability. Spector and Maurer (2009) suggested that synaesthesia might result from less pruning of connections between neurons in the brain, compared to normal subjects. However, Merabet et al. (2008) demonstrated that after wearing a blindfold, the visual cortex is recruited for braille reading, suggesting disinhibition of existing connections instead.

Evaluate Biederman's structural description model

Biederman (1987) proposed a system that relies on 'non-accidental properties' retained in any 2D projection, yielding an alphabet of 36 geons. There is considerable evidence for the existence of geons. Firstly, Biederman presented behavioural evidence that recognition is more impaired if geon-related information (e.g. vertices) is obscured than if equal amounts of other contour information are removed. Tanaka (2016) also showed that IT neurons are equally sensitive to complex shapes and their abstracted versions, and often respond only to parts of objects, providing further support that geon-like parts underlie object processing. While such models explain invariance well, their crude representations make it hard to recognise subtly different individuals within a class. Extracting volumes can also be difficult in real images due to occlusion, lighting and other issues. Human perception, and many IT neurons, have also been found not to be viewpoint invariant.

Describe how binocular disparity can be used to perceive depth

Binocular disparity is the difference in the location of a feature between the left and right eyes, arising because each eye receives a slightly different view of the world (Wheatstone, 1838). When fixating on an object, its retinal image falls on corresponding points in the two eyes. Objects further than the fixation point generate uncrossed disparity, while nearer objects generate crossed disparity; this provides depth information relative to the fixation point. The first place where information from both eyes comes together is V1, where many cells are sensitive to retinal disparity (Pettigrew, 1972), demonstrating the existence of far and near cells. Some cells respond most to small disparities (tuned excitatory) and some to large disparities (tuned inhibitory). About 5% of the population is stereoblind, but the presence of monocular cues (motion parallax, pictorial cues) means they can still make depth judgements.

Describe the mechanisms of the "tilt-after effect" / "simultaneous tilt contrast"

Both make use of the principle of neural adaptation: Gibson (1937) described how prolonged viewing of one orientation reduces sensitivity to that orientation and produces repulsion in the perceived tilt of dissimilar gratings. Adaptation reduces sensitivity, so a higher contrast is required to reach the same response. The tilt aftereffect occurs because neurons tuned to one particular (tilted) orientation adapt, with accompanying decreases in sensitivity around the adapted orientation, causing subsequently viewed dissimilar orientations to appear repelled away. This shifts the peak of the population response, making a stimulus appear tilted even though it is presented upright. Simultaneous tilt contrast operates by the same mechanism, except that the surrounding stimulus acts as the neural adaptor, shifting the peak responses of the neurons processing the central stimulus. Both phenomena demonstrate adaptation, as well as the population coding of orientation.

Describe how to determine thresholds for Yes/No methods and forced-choice methods

Both methods require the use of the Method of Constant Stimuli to generate a psychometric function. By tallying the responses, there is a graded change in responses that takes the form of a cumulative Gaussian (i.e. a psychometric function is fitted to the data). For Yes/No methods, the threshold lies somewhere above 0% and below 100% 'yes' responses; the midpoint represents the 'tipping point' between predominantly yes and predominantly no (and is also the point of fastest change). For forced-choice methods, the guess rate must be taken into account: with a 2AFC design the guess rate is 50%, so the midpoint (threshold) is now taken as 75% correct; with 4AFC (e.g. for motion), chance is 1/4 (25%) and the threshold is 62.5%.
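The threshold conventions above follow directly from rescaling the same underlying function between the guess rate and 1. A minimal sketch (the mean and slope values are arbitrary):

```python
import math

def cum_gauss(x, mu, sigma):
    # cumulative Gaussian: the typical shape of a psychometric function
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def p_yes(x, mu, sigma):
    # Yes/No task: proportion of 'yes' responses rises from 0 to 1
    return cum_gauss(x, mu, sigma)

def p_correct_nafc(x, mu, sigma, n):
    # n-alternative forced choice: performance rises from the guess
    # rate 1/n to 1, so the underlying function is rescaled
    guess = 1 / n
    return guess + (1 - guess) * cum_gauss(x, mu, sigma)

mu, sigma = 5.0, 1.0
# Yes/No: threshold taken at the 50% midpoint (steepest point)
assert math.isclose(p_yes(mu, mu, sigma), 0.5)
# 2AFC: guess rate 50%, so the equivalent midpoint is 75% correct
assert math.isclose(p_correct_nafc(mu, mu, sigma, 2), 0.75)
# 4AFC: guess rate 25%, so the midpoint threshold is 62.5% correct
assert math.isclose(p_correct_nafc(mu, mu, sigma, 4), 0.625)
```

In all three cases the threshold stimulus level is the same (`mu`); only the percent-correct value it corresponds to changes with the guess rate.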

What does CSF reveal about spatial vision of humans and animals

The CSF defines our window of visibility, where the highest spatial frequency perceived denotes our acuity. Studies of the CSF in babies and cats have revealed differences in acuity and spatial perception relative to human adults. Teller et al. (1974) used a preferential looking paradigm containing a two-alternative forced choice: the baby is habituated to a reference stimulus card, and if the baby prefers to look at the test stimulus, this suggests the baby can still discriminate the contrast of the test stimulus. This demonstrated that babies have lower acuity (a lower cut-off) as well as lower contrast sensitivity across all spatial frequencies. Bisti and Maffei (1974) measured a feline CSF by presenting gratings on a monitor, with the cat trained to perform a yes/no task (press a lever when a grating is seen, with a milk reward), varied using the method of constant stimuli. They found that cats have lower acuity than humans, but higher contrast sensitivity at lower spatial frequencies.

What is the evidence for multiple and independent SF channels?

Campbell and Robson (1968) hypothesised that the visual system is composed of spatial frequency channels, each sensitive to particular spatial frequencies. Adaptation reduces sensitivity to contrast, but the question was whether all channels are affected. Blakemore and Campbell (1969) used an adapting stimulus at 7.1 cycles per degree and found a strong reduction in sensitivity at the adapted SF, with a smaller effect on neighbouring SFs. This is consistent with there being multiple channels for SF, with SF determined by population coding (evidence from adaptation and surround effects). However, Schyns and Oliva (1999) found that when a high-SF image is paired with a low-SF image, we are not able to access the channels separately: the high-SF image tends to dominate, although the low-SF image can still be seen by squinting, demonstrating dominance of particular SF channels.

What is the neural basis of cerebral achromatopsia

Cerebral achromatopsia is a type of colour blindness in which the world consists only of different shades of grey. Individuals tend to perform poorly on an 'odd-one-out' task when the odd item is defined by colour, though they perform adequately when it is defined by luminance. They have a full set of cones, so the colour blindness is probably due to damage to cortical areas. Lesion studies on monkeys demonstrated that it is not due to damage to V4 (Cowey & Heywood, 1995). Hadjikhani et al. (1998) argued that V8 might be more important for colour perception, demonstrating that V8 neurons are highly excited by red-green gratings and are active during the colour aftereffect (although no actual colour is seen). Multiple extra-striate areas are believed to be involved in different aspects of colour perception.

Why are cochlea implants currently not as good as normal hearing?

Cochlear implants do quite badly at capturing pitch, as they obtain pitch information purely from a temporal code. This results in a pitch limit of around 300 Hz (Shannon, 1993), leaving users unable to recognise melodies very well, as demonstrated by Kong et al. (2004). A further problem is that alongside the hair cell loss the implant substitutes for, there is probably accompanying neurodegeneration within the brain. Furthermore, incomplete insertion of the electrode array commonly occurs, where the electrodes do not extend fully along the length of the BM. This results in spectral shifting, though Rosen et al. (1999) showed the deleterious effects can be overcome over time through training: studying four-channel implant simulations in normal listeners, they demonstrated that with training, word perception improved from 0% to 30% following spectral shifting. There is also an issue of frequency selectivity, as there are only around 20 frequency channels compared to some 3,000 IHCs, and the current spreads across the BM, further exacerbating the problem. There is also less independent firing across nerve fibres, which might affect temporal coding. Finally, the electrical dynamic range is small, but intensity just-noticeable differences are not correspondingly smaller, resulting in fewer discriminable steps in loudness.

Explain the mechanisms behind colour constancy and the role of V4

Colour constancy refers to the ability to perceive the colour of objects invariant to the colour of the light source. This is believed to be partly attributable to double-opponent cells in V1 (both colour- and spatially opponent). These cells receive input from cones of opponent colours and respond poorly to uniform colour, so they help to detect colour borders and colour contrast: assuming an R+G- ON-cell, it responds best when red light is shone on its centre and green light on its surround. However, Zeki (1983) demonstrated that V1 merely provides the initial stages of colour constancy, and that the role of V4 is much more important. Neurons in both V1 and V4 do not fire when viewing a white square, but do so when viewing a green square. When a green filter is applied, however, the same V1 neuron now fires in response to the white square, while the V4 neuron does not. V4 is thus able to differentiate surface colour from the colour of the light that illuminates it.

Is colour perception and colour knowledge dissociable?

Colour perception and colour knowledge are known to be dissociable, as shown by studying patients with brain damage that impairs one but not the other. In cerebral achromatopsia, individuals have impaired colour perception but intact colour knowledge, demonstrated through intact colour imagery abilities (Shuren et al., 1996): they are able to tell which two out of three named objects have the same colour. In colour agnosia, individuals have normal colour perception but impaired colour knowledge (Miceli et al., 2001): they can arrange, recognise and name colour patches, but are unable to tell you the colour of an object. Lesions that lead to colour agnosia were also found to be more anterior in the ventral visual pathway.

Describe evidence for multisensory plasticity and whether they rely on existing pathways or create new functional ones

Conner et al. (1997) found that braille reading was more impaired by TMS disruption of the visual cortex in blind subjects, and the somatosensory cortex in sighted subjects. This demonstrated that blind individuals recruited the visual cortex to aid in the reading of braille. Thaler et al. (2011) also found that visual cortex areas were activated in echolocation tasks in blind subjects but not in control subjects. Evidence has also demonstrated that the recruitment of visual cortex probably relies on existing pathways. Merabet et al. (2008) blindfolded sighted subjects and taught them braille for 5 days. They found that after teaching, braille reading recruited the visual cortex and TMS disruption over the visual cortex impaired braille reading. Such an effect disappeared immediately after the blindfold came off. This showed that recruitment of visual cortex by tactile senses can be extremely rapid, thus it is more likely that existing pathways were used.

Describe Selfridge's (1959) pandemonium model

The model contains feature demons and cognitive demons. Feature demons respond selectively when particular local configurations (e.g. vertical lines) are present. Cognitive demons, which represent particular letters, look for particular combinations of features from the feature demons. The more of its features are present, the louder a cognitive demon 'shouts' to the highest level, the decision demon, who selects the letter. Thus, in this system, individual characters are represented as a set of critical features, and processing of any image proceeds in a hierarchical fashion through increasing levels of abstraction. It actually resembles modern connectionist networks. However, Marr and Nishihara (1978) raised numerous criticisms of the model: template/feature matching requires representation of all possible objects/features from all viewpoints (computationally intractable). The Pandemonium model also fails to capture overall structural relations, and so confuses an 'F' with an inverted 'F'. It also discards all information that distinguishes different instances of the same pattern, since it seeks only to classify patterns.

Describe evidence for tonotopy in the auditory cortex

Da Costa et al. (2011) made use of fMRI to demonstrate how the anterior portion prefers lower frequencies while posterior portion prefers higher frequencies. This suggests that as frequency of tones vary, different portions of the auditory cortex are being activated. Neuronal recordings have also demonstrated a high to low frequency gradient when moving across the auditory cortex, replicating the findings of the fMRI studies.

Describe the staircase procedure and its advantages and disadvantages

This derives from the Up/Down Method, refined by Cornsweet (1962); the basic approach resembles the Method of Limits. It results in a single intensity value at which a desired performance level is reached (e.g. 50% 'yes' responses). In a one-down/one-up staircase: a) start at (e.g.) a high intensity; b) intensity is reduced with each 'yes' until the response changes, and raised with each 'no' until it changes back to 'yes'. This converges on 50% performance, and the average of the reversal points is taken as the threshold. If the criterion to decrease intensity is stricter, a higher performance level can be targeted: e.g. if two correct responses in a row are required to move down, performance will converge above chance. Advantages: very quick (a threshold estimate in 40-50 trials or fewer), minimises issues with habituation/expectation, and can target different performance levels (e.g. for 2AFC vs. 4AFC). Disadvantages: the rapid drop to threshold is difficult for observers, particularly clinical populations or children, and it returns only a single performance level, which is not ideal if one is interested in both performance (threshold) and appearance; the Method of Constant Stimuli is preferable in those circumstances.
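The one-down/one-up rule is simple enough to simulate. The sketch below uses an idealised deterministic observer (a step-shaped psychometric function) purely to show the convergence-to-reversals logic; a real observer responds probabilistically.

```python
def one_up_one_down(true_threshold, start=10.0, step=1.0, n_reversals=12):
    """Simulate a 1-up/1-down staircase on an idealised observer who says
    'yes' whenever intensity exceeds the true threshold. The staircase
    converges on the 50% point of this observer's step-shaped function."""
    intensity, direction = start, -1
    reversals = []
    while len(reversals) < n_reversals:
        seen = intensity > true_threshold      # observer's response
        new_direction = -1 if seen else +1     # down after 'yes', up after 'no'
        if new_direction != direction:         # response changed: a reversal
            reversals.append(intensity)
        direction = new_direction
        intensity += direction * step
    # threshold estimate: mean of reversal intensities (first two discarded)
    kept = reversals[2:]
    return sum(kept) / len(kept)

est = one_up_one_down(true_threshold=4.3)
# the estimate lands within one step size of the true 50% point
assert abs(est - 4.3) < 1.0
```

With a deterministic observer the staircase simply oscillates one step either side of the true threshold, which is why averaging the reversal points recovers it.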

Describe the detection and discrimination methods

Detection methods measure a threshold defined as the minimum intensity at which a stimulus is just detectable. Examples include the lowest brightness at which a stimulus can be perceived, or the slowest speed at which motion in a stimulus is still observable. Example experiment: what is the lowest brightness/luminance that can be seen? Hecht, Haig and Chase (1937) measured detection thresholds after different durations in the dark. Discrimination methods measure a threshold defined as the smallest difference in intensity that is just detectable. This requires comparison between two or more stimuli, or between one stimulus quantity and a standard/reference. Examples include the smallest observable difference in brightness between two stimuli, or in orientation between a stimulus and a standard.

Describe the problems with a Yes/No method and what the solution is

Differences in sensitivity: with thresholds, increases vs. decreases in sensitivity can be measured as changes in threshold, and an individual with higher sensitivity will show a lower threshold than an individual with lower sensitivity. But Yes/No procedures confound the threshold with the observer's subjective criterion, the rule used to translate sensory responses into behaviour. Differences in criterion: consider someone eager to respond 'yes' (a liberal criterion) vs. someone reluctant to do so (a conservative criterion); the resulting differences in response rates are impossible to distinguish from changes in sensitivity. The solution is to use a forced-choice paradigm, forcing the subject to choose between two (or more) alternatives.
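The criterion confound can be made concrete with a small signal-detection-style sketch. The hit and false-alarm rates below are invented for illustration: two observers whose raw 'yes' rates look very different can have roughly the same underlying sensitivity, which the signal detection measure d' recovers but a bare hit rate cannot.

```python
from statistics import NormalDist

def dprime(hit_rate, fa_rate):
    # d' separates sensitivity from criterion by combining hits with
    # false alarms: z(hit rate) - z(false alarm rate)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# two hypothetical observers with the same sensitivity, different criteria
liberal = dict(hit=0.95, fa=0.50)       # says 'yes' readily
conservative = dict(hit=0.69, fa=0.16)  # reluctant to say 'yes'

# their hit rates alone look wildly different...
assert liberal["hit"] != conservative["hit"]
# ...but d' shows their sensitivity is roughly the same
d_lib = dprime(liberal["hit"], liberal["fa"])
d_con = dprime(conservative["hit"], conservative["fa"])
assert abs(d_lib - d_con) < 0.2
```

Forced-choice designs sidestep the problem differently, by removing the yes/no decision entirely, but the sketch shows why a yes/no threshold on its own cannot be interpreted.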

Describe how frequency is encoded in the auditory nerve

Different frequencies of sound lead to different firing in different auditory nerve fibres (Rose et al., 1971). Each fibre has a specific 'best frequency', the frequency at which even a very soft sound is enough to make the fibre fire more. This provides a tonotopic representation and organisation, which depends largely on the position along the BM of the IHC with which the afferent neuron synapses. Such tonotopic organisation is preserved throughout the ascending auditory pathway. However, frequency is not only encoded using this place code; it is also encoded using a temporal code through phase locking.
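Phase locking, the temporal code mentioned above, can be illustrated with a toy fibre. Real fibres fire probabilistically and skip cycles; the deterministic zero-crossing rule below is a simplifying assumption used only to show that inter-spike intervals carry the stimulus period.

```python
import numpy as np

fs = 20000                     # sample rate (Hz)
freq = 250.0                   # tone frequency (Hz), period 4 ms
t = np.arange(0, 0.1, 1 / fs)
wave = np.sin(2 * np.pi * freq * t)

# toy phase-locked fibre: fires at the positive-going zero crossing of
# each cycle (real fibres fire stochastically at a preferred phase)
crossings = np.where((wave[:-1] < 0) & (wave[1:] >= 0))[0]
spike_times = t[crossings]

# inter-spike intervals cluster at the stimulus period, so interval
# statistics carry frequency information independently of place
intervals = np.diff(spike_times)
assert np.allclose(intervals, 1 / freq, atol=1e-3)
```

Because intervals (and their multiples, when cycles are skipped) encode the period, the auditory system can recover frequency temporally for low frequencies even where place coding is coarse.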

Describe evidence for facilitation and interference of cross-modal attention

Driver and Spence (1998) demonstrated cross-modal attentional facilitation by asking participants to make elevation judgements while cues in another modality were presented on the left or right. The presentation of an additional cue facilitated attention to that spatial location, an effect present even with tactile cues. This can also be demonstrated in driving, where cues presented in the same position improve task performance compared to cues presented at different locations (Spence & Read, 2003). However, interference effects have also been found using the irrelevant speech effect, whereby immediate memory for visually presented information can be disrupted by exposure to auditory information. Colle and Welsh (1976) had two groups recall visually presented information, either in a quiet condition or with a foreign language played over earphones, and found greater recall in the quiet group than in the irrelevant speech group.

Describe the role of the lateral intraparietal area (LIP)

Eye-Centred. One of the egocentric frames of reference in the brain is the eye-centred frame of reference where the object is coded relative to the centre of gaze. If the eye moves left, the object moves within this frame of reference. Such a representation was thought to be encoded by the neurons in the lateral intraparietal area (LIP). Duhamel et al. (1992) studied the visual responsiveness of LIP neurons of alert monkeys performing fixation and saccade tasks. During fixation, the receptive field borders were defined, and they found that stimuli presented outside these borders never activated the neuron. During the saccade task, the fixation target jumps while a visual stimulus appears. They found that when a stimulus is presented outside the receptive field, a saccade will shift the receptive field to the stimulus. In addition, the LIP neurons begin to fire even before the completion of the saccade, suggesting predictive ability of the LIP neurons. Therefore, the LIP neurons help to maintain a consistent representation of visual space as eyes move around. This is because the LIP neurons can shift the receptive field even before the eye moves, allowing a constant representation. Around a saccade, neurones in area LIP change the part of the retina from which they derive effective signals.

Explain how faces are encoded in face space

Face space is a heuristic (Valentine, 1991) in which faces are coded along a set of dimensions that reduce information and make use of population coding. Faces are arranged in the space with the average face at the centre, and can be morphed continuously between different extremes, passing through the average face. Face space is useful in understanding face adaptation, as adaptation shifts perceived identity towards the opposite extremity (Leopold et al., 2001). Firing rates of neurons in IT can also be predicted using face space (Leopold et al., 2006): firing rates change with shifts towards a neuron's preferred face. Face space can also account for the other-race effect, where other-race faces are encoded using inappropriate features along the wrong dimensions, making their vector distances greater and pushing them outside our range of sensitivity.
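The adaptation result can be sketched geometrically. The two-dimensional space, the identity "Adam", the anti-face, and the subtractive model of adaptation below are all illustrative assumptions; real face spaces have many dimensions and adaptation is not literally subtractive.

```python
import numpy as np

# toy 2-D face space: each face is a point, the average face is the origin
average = np.zeros(2)
adam = np.array([3.0, 1.0])   # a hypothetical identity
anti_adam = -adam             # its 'anti-face', opposite the average

def identity_strength(identity, probe):
    # how strongly a probe face is seen as a given identity: projection
    # of the probe onto that identity's direction from the average face
    return float(np.dot(probe, identity) / np.linalg.norm(identity))

def after_adaptation(probe, adaptor, strength=0.3):
    # adaptation modelled as a simple shift of perception away from
    # the adaptor (a crude stand-in for reduced gain along that axis)
    return probe - strength * adaptor

# the average face carries no identity signal before adaptation...
assert identity_strength(adam, average) == 0
# ...but after adapting to anti-Adam it is perceived as Adam-like,
# as in Leopold et al. (2001)
shifted = after_adaptation(average, anti_adam)
assert identity_strength(adam, shifted) > 0
```

The same geometry motivates the other-race effect: a face coded far from the origin along poorly calibrated dimensions sits where identity directions are sparsely sampled.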

Why facial processing is so difficult

Facial processing is difficult because of the 'many-to-one' mapping problem: great variance exists across images of faces, including structural differences (face shape), surface differences (skin tone, lighting), rigid transformations (head rotations) and non-rigid transformations (expressions, speech). In spite of this, we are still able to process faces at the individual level with high efficiency, especially compared to other visually homogeneous object categories. We are even predisposed to look out for faces (e.g. pareidolia), which occurs even for stimuli with mere vertical symmetry and a natural distribution of spatial frequencies (Paras & Webster, 2013).

Can gaze be dissociated from attention?

Gaze can be dissociated from attention using the spatial cuing paradigm. Posner et al. (1980) presented cues which point to a spatial location where targets might appear. However, they ensured that there is insufficient time for the eyes to move around the screen to look at the target, thus requiring attention to move. They found that responses are much quicker when cues are valid, suggesting that our attention can be oriented to the target without moving our eyes.

Give a brief description of the types of cells in the V1

There are generally three types of cells present (Hubel & Wiesel, 1959): simple cells, complex cells and hypercomplex cells. Simple cells are orientation selective, and some are directionally selective. They are thought to be constructed from rows of on- and off-centre cells (each with a Mexican-hat profile), producing bands of excitatory and inhibitory regions: a bar detector. Changing the orientation changes the response, so a single cell detects information at its preferred orientation and location. Complex cells are thought to be derived from overlapping simple cells, creating an invariant bar detector with no fixed inhibitory and excitatory regions; they are orientation selective, some are directionally selective, and they have no preferred location. Hypercomplex cells are constructed from excitatory and inhibitory complex cells, and are concerned with the length of the stimulus.

Summarise the evidence for and against Gibson's direct affordances theory.

Gibson's (1979) direct affordances theory states that the world is perceived in terms of its possible actions for the individual, and actions are afforded based on an interaction between the visual attributes of an object and the observer's goal. Evidence for: Chao and Martin (1999) demonstrated that the AIP/F5 grasping circuit is activated when using, categorising or viewing graspable objects, and that simply viewing graspable stimuli potentiates a motor response, suggesting automatic processing of affordances. This could also potentially explain why young children have problems inhibiting their pre-potentiated response in the end-state comfort task. Evidence against: Creem and Proffitt (2001) demonstrated the role of semantic processing: grasping a spoon appropriately for its typical use is impaired when subjects perform a semantic word-pair learning distraction task, suggesting that semantic processing plays some role in processing an object's typical affordances. However, this could be confounded with the observer's goal, which might simply be to pass the spoon rather than use it, leading to a different grasp. Ultimately, it seems some semantic processing is still involved, and there are alternative theories such as the dual-stream hypothesis.

Is there such a thing as innate face processing?

Goren et al. (1975) found that new-borns will turn further to look at face-like patterns than scrambled versions of the same images, suggesting a very early separation of face recognition abilities from general object recognition. This suggests that face processing is innate, and is evidence for the domain specificity hypothesis. However, Simion et al. (2002) found that it might be a more general bias for certain patterns (e.g. top heavy patterns). Turati et al. (2002) demonstrated that this effect persisted even with face-like stimuli, although it disappears at around 3 months. Evidence, however, has been found for innate mother detection. Field et al. (1984) demonstrated that babies aged 1-4 days looked longer at the mother's face as compared to a stranger's, using a preferential looking paradigm. This is suggestive of some specialised face recognition at birth.

Describe the role of the ventral intraparietal area (VIP)

Head-centred frame of reference. Another egocentric frame of reference in the brain is the head-centred frame, where the object is coded relative to the individual's head. This applies, for example, to hearing, where the ears are in a fixed position relative to the head and cannot move around (the head must turn for the ears to turn). Such a representation was thought to be encoded by neurons in the ventral intraparietal area (VIP). Stimuli conveying motion information have been demonstrated to activate VIP neurons: Colby et al. (1993) recorded from single neurons in the VIP of alert monkeys and studied their visual and oculomotor response properties following the presentation of moving stimuli with different directions and trajectories. They identified that the locations of these stimuli are frequently represented in head-centred coordinates, as the stimulus triggering the strongest response was one moving toward a specific point on the face from any direction. This means that when the position of the eyes changes, the receptive fields of VIP neurons maintain the representation of a given spatial location. Interestingly, most receptive fields for somatosensory stimuli are restricted to the head and the face, and match those for visual information in location, size and stimulus direction (Duhamel et al., 1998). For example, a typical VIP neuron may discharge when a moving visual stimulus is detected in the upper left visual quadrant, but also when the left eyebrow of the animal is touched. Likewise, the preferred direction of VIP neurons for visual and vestibular stimulation (horizontal rotation of the animal on a turntable) is congruent, i.e. the neurons are preferentially responsive to visual motion and head rotation in the same direction (Bremmer et al., 2002). Therefore, the functions of VIP may encompass the perception of self-movements and object movements in near extra-personal space (Bremmer et al., 2002). 
For example, when an animal is heading for fruits, visual (approaching branches of trees), tactile (touching leaves) and vestibular information (perception of head movement) may guide the monkey to move through the forest.

Describe the colour opponent theory

Hering first proposed the colour opponent theory by pointing out that certain colours are mutually exclusive, e.g. red with green and yellow with blue: it is hard to imagine a reddish-green. He suggested that there are three systems making use of our three colour cones, where cone photoreceptors are linked together to form three opposing colour pairs: blue/yellow, red/green, and black/white (luminance). Luminance is derived by combining the input of the red and green cones; red-green is derived by dividing the input of the red cones by that of the green cones; yellow-blue is derived by dividing the luminance output by the input of the blue cones. This can be observed in colour after-effects, e.g. the lilac chaser. Snowden (2012) proposed two main benefits of the theory. First, as the intensity of a light increases, the responses of all cones increase, but the colour ratios stay the same, so the colour stays constant; the luminance signal, which is the sum of the red and green cones, increases as the light intensity increases. Second, as we change the wavelength of a light, the ratio of the two colour responses varies, but the intensity signal remains relatively constant.
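The channel combinations described above can be written out directly. This is a simplification of the flashcard's description, not a physiological model, and all cone values are invented: luminance sums the L ("red") and M ("green") cone inputs, while the chromatic channels are ratios, which makes them invariant to overall light intensity, the first benefit noted above.

```python
# Toy opponent channels following the combinations described in the text.

def opponent_channels(L, M, S):
    luminance = L + M          # "red" + "green" cone input
    red_green = L / M          # ratio of L to M cone input
    yellow_blue = (L + M) / S  # luminance relative to "blue" cone input
    return luminance, red_green, yellow_blue

bright = opponent_channels(1.2, 0.8, 0.4)
dim = opponent_channels(0.6, 0.4, 0.2)   # the same light at half intensity
print(bright, dim)  # luminance halves, but both colour ratios are unchanged
```

Halving the intensity scales all three cone responses equally, so the luminance channel halves while the two ratio channels, and hence the perceived colour, stay constant.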

What is the Hermann grid illusion, and how can we explain its occurrence?

The Hermann grid illusion is one where black/dark spots are visible at the intersections of the white grid lines, but not on the streets between two black squares. Explanation by the lateral inhibition theory (Baumgartner, 1960): assuming ON-centre cells, cells whose receptive fields sit at an intersection receive more surround inhibition than those between black squares, leading to the dark spots. The effect is more prominent in the periphery, where receptive fields are bigger, leading to greater inhibition than at the fovea. Counter-evidence: Geier et al. (2004) found that making the grid lines sinusoidal (wavy) removed the effect, even though by the lateral inhibition theory the ON-centre cells should still receive the same amount of inhibition as before. Schiller (2005) used grid patterns designed to maximise inhibition at the intersections, yet this neither increased the effect nor led to darker spots. Alternative explanations have thus emerged, e.g. assigning the principal role to the orientation-selective S1 cells.
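Baumgartner's account can be sketched numerically. This is a hypothetical toy model (grid geometry, receptive-field radii and the difference-of-means response are all assumptions): an ON-centre cell whose centre sits on white responds less at a street intersection, because its inhibitory surround contains more white, than at a street segment between two black squares.

```python
import numpy as np

def hermann_grid(size=90, period=30, street=10):
    """Synthetic Hermann grid: 1 = white street, 0 = black square."""
    idx = np.arange(size)
    white = (idx % period) >= (period - street)
    return (white[:, None] | white[None, :]).astype(float)

def on_centre_response(img, r, c, r_centre=4, r_surround=9):
    """Mean luminance in the excitatory centre minus the inhibitory surround."""
    rr, cc = np.ogrid[:img.shape[0], :img.shape[1]]
    d2 = (rr - r) ** 2 + (cc - c) ** 2
    centre = d2 <= r_centre ** 2
    surround = (d2 > r_centre ** 2) & (d2 <= r_surround ** 2)
    return img[centre].mean() - img[surround].mean()

grid = hermann_grid()
at_intersection = on_centre_response(grid, 24, 24)  # crossing of two streets
at_street = on_centre_response(grid, 24, 10)        # street between squares

# Both centres sit on white, but the intersection's surround sees four white
# arms rather than two, so that cell is more inhibited: a perceived dark spot.
print(at_intersection < at_street)  # True
```

The same code also hints at Geier et al.'s objection: the prediction depends only on how much white falls in the surround, which wavy lines barely change, yet the illusion disappears.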

Evaluate evidence against the FIT

However, studies investigating visual search have found that attention might not be required for all conjunction searches. Driver and McLeod (1992) found that parallel searches could occur for conjunctions of motion and orientation, suggesting that motion might be processed especially efficiently, perhaps because of its relevance to survival. Nakayama and Silverman (1986) also found that when conjunction searches are split into depth layers, parallel searches can occur without attention. Behrmann et al. (2003) studied patients with unilateral hemispheric infarcts to the left or right hemisphere on feature and conjunction search tasks, and found that patients with brain damage, with or without neglect, were impaired at searching for contralateral targets in both forms of visual search, even though the FIT predicts that attention is not required in feature searches.

Describe how we resolve the correspondence problem

In its traditional form, the correspondence problem referred to the difficult situation faced by the brain when it has to 'decide' which stimulus in one eye should be matched with which stimulus in the other eye. However, while it was originally framed as a purely unimodal visual problem, researchers have recently come to realize that (in complex real-world scenes) the brain also faces a crossmodal version of the correspondence problem (Fujisaki & Nishida, 2007): how, for example, in a cluttered everyday multisensory scene, does the brain know which visual, auditory, and tactile stimuli to bind into unified multisensory perceptual events, and which to keep separate? Temporal proximity: cue information received close together in time is often paired. Spatial proximity: cue information received close together in space is often paired. For example, the rubber hand illusion (Ehrsson et al., 2004) occurs because we integrate cues of sight, touch and proprioception to create a convincing feeling of body ownership, even though the hand we see being stroked is a rubber one. However, even when there is spatial misalignment of visual and haptic cues, adults still integrate vision and touch (Helbig & Ernst, 2007). General innate biases also play a role: humans of all cultures and ages, including pre-verbal toddlers, think that the spiky shape is Kiki and the round shape is Bouba (Spence, 2011).

Describe the phenomenon of phase locking in the auditory system

Information about stimulus frequency is coded not only by which nerve fibres are active (place code), but also by when the fibres fire (temporal code). However, this only applies to low-frequency sounds, where auditory nerve fibres fire at specific points on the waveform: nerve firings are found to be synchronised to the peaks of the sound wave. This occurs because the vibration of the basilar membrane (BM) shifts the inner hair cells (IHC) to one side, which causes firing in the auditory nerve; hence the movement of the BM also determines when the auditory nerve fires. Firing at the peak means the fibre fires when the BM moves up, i.e. when the IHC is deflected such that an action potential is triggered. Synchrony of neural firing is strong up to about 1-2 kHz; this degree of synchrony decreases steadily over the mid-frequency range, and there is no longer evidence of synchrony above 5 kHz, because at higher frequencies the BM vibrates too quickly for the timing of action potentials to follow each cycle reliably. This suggests that the determination of pitch for low-frequency sounds might differ from that for higher-frequency sounds, which make use of a place code.
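Why phase locking degrades with frequency can be sketched with a toy simulation. The model is an assumption for illustration: spikes are locked to stimulus peaks but carry a fixed amount of timing jitter (taken here as 0.1 ms). The same jitter is a small fraction of a 500 Hz cycle but comparable to a whole 5 kHz cycle, so "vector strength" (a standard measure of spike synchrony to the stimulus phase) collapses at high frequencies.

```python
import numpy as np

rng = np.random.default_rng(0)
JITTER_S = 1e-4  # assumed 0.1 ms of spike-timing jitter

def vector_strength(freq, n_spikes=2000):
    """|mean resultant| of spike phases: 1 = perfect locking, 0 = none."""
    peaks = np.arange(n_spikes) / freq           # one spike per cycle peak
    spikes = peaks + rng.normal(0, JITTER_S, n_spikes)
    phases = 2 * np.pi * freq * spikes
    return abs(np.mean(np.exp(1j * phases)))

low = vector_strength(500)    # jitter << period: strong phase locking
high = vector_strength(5000)  # jitter ~ period: synchrony lost
print(low, high)
```

With these assumed numbers, locking is near-perfect at 500 Hz and essentially absent at 5 kHz, mirroring the empirical breakdown of synchrony above about 5 kHz described in the text.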

Describe evidence for the holistic processing of faces

Inversion effects for faces but not objects have been taken as evidence that objects are processed in a part-based fashion, whereas faces are processed configurally or holistically. Le Grand et al. (2001) found that feature changes (e.g. eyes, lips) are easy to detect despite inversion, but detection of configural changes (e.g. eye spacing) is strongly impaired by inversion. This suggests that inversion disrupts the configural processing of faces. Further evidence comes from the part-whole effect. Tanaka and Farah (1993) showed that if subjects trained on intact faces, recognition was better for whole faces than for isolated features; if they trained on scrambled faces, they were better at recognising isolated features than intact faces. This shows that configural processes boost the learning of faces, while scrambled faces are learnt in a part-based fashion, similar to objects. Young et al. (1987) demonstrated through the composite illusion that configural processing is difficult to avoid: the two halves of a composite face fuse perceptually, and changes to one half are easier to detect when the faces are inverted, which disrupts holistic processing. The Thatcher illusion (Thompson, 1980) also shows that rotations of facial features become obvious and grotesque in upright faces, as they disrupt our holistic processing.

Explain the neural basis of motion perception

It is believed that the neural basis of motion perception lies in V5/MT, which is part of the dorsal stream. All cells in V5 appear to be directionally selective, and they show a columnar structure in which cells grouped together have similar preferred directions of motion (Albright, 1984). Electrical stimulation of MT cells can bias motion perception in the direction preferred by the stimulated cell (Salzman & Newsome, 1994). The MT also appears to be active while people experience motion after-effects (Tootell et al., 1995). Further evidence comes from people with motion blindness, e.g. patient LM (Zihl et al., 1983). She had bilateral lesions caused by blood clots; her visual acuity was preserved but motion perception was absent (e.g. she was unable to pour tea). This demonstrates that motion is processed as a separate entity (primacy of motion).

Role of different senses in calibrating different perceptions

It is hypothesized that vision is responsible for calibrating spatial perception (e.g. orientation), while touch is responsible for calibrating size perception and audition for calibrating temporal perception. Gori et al. (2012) had visually impaired children make touch-based judgements and motor-impaired children make visual judgements. Motor-impaired children performed poorly when judging size, but visually impaired children performed poorly when judging orientation. Furthermore, Kolarik et al. (2016) found that blind adults are less sensitive in auditory space perception but not in time perception.

What are the properties of the LGN cells?

LGN cells receive input from both eyes, but only from the contralateral half of the visual field. The LGN contains two types of layers: parvocellular and magnocellular. The parvocellular layers receive input from midget cells, while the magnocellular layers receive input from parasol cells. There are four parvocellular layers and two magnocellular layers, with each eye supplying half of each type. The LGN forms a retinotopic map across its 6 layers, although it only receives about 10% of its input from the retina. Parvocellular cells are sensitive to colour, with high spatial resolution but poor temporal resolution; magnocellular cells are insensitive to colour, with poor spatial resolution but excellent temporal resolution. Schiller et al. (1990) lesioned either the M or P layers of monkeys and found that P lesions impaired colour discrimination, while M lesions impaired motion discrimination. LGN receptive fields resemble those of ganglion cells, with a centre-surround organisation of excitatory and inhibitory regions. Magno cells are big, spatially opponent and monochromatic; parvo cells are small, and both spatially and colour opponent.

Describe how the LSO can localise sound along the azimuth

LSO neurons measure the interaural level difference (ILD), which is the difference in sound intensity between the ears. Sound is louder at the ear nearer to the source, as the head blocks the sound waves and casts an acoustic shadow. LSO neurons receive excitatory inputs from the ipsilateral ear and inhibitory inputs from the contralateral ear, and compute the difference in amplitude between the inputs from the two ears. By taking the net difference between excitatory and inhibitory inputs, they signal which ear the sound is louder in. Thus, LSO responses depend on the ILD and are strongest for sounds that are louder at the ipsilateral ear.
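The excitation-minus-inhibition computation above can be written as a minimal sketch. The rectified-difference form and the dB values are illustrative assumptions, not a physiological model: each model LSO neuron grows its output with how much louder the sound is on its own side, and is silenced when the contralateral ear is louder.

```python
# Toy LSO neuron: ipsilateral excitation minus contralateral inhibition,
# floored at zero (a neuron cannot fire at a negative rate).

def lso_response(ipsi_level, contra_level):
    return max(0.0, ipsi_level - contra_level)

# A sound source on the left: the left ear is louder (e.g. 65 dB vs 55 dB).
left_lso = lso_response(ipsi_level=65, contra_level=55)   # strong response
right_lso = lso_response(ipsi_level=55, contra_level=65)  # silenced
print(left_lso, right_lso)  # 10.0 0.0
```

Comparing the two LSOs' outputs thus indicates which side the source is on, and by how much the levels differ.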

Compare and contrast the MSO and LSO

MSO neurons make use of interaural time differences, while LSO neurons make use of interaural level differences. There is an over-representation of low-frequency sounds in the MSO, while the same occurs for high frequency sounds in the LSO (probably because lower frequencies go around the head unblocked). Both neurons help to localise sound along the azimuthal axis, and are complementary mechanisms

Describe how the MSO can localise sound along the azimuth

MSO neurons measure interaural time differences (ITDs), which are the delays between a sound reaching the left and the right ear. The minimum audible angle has been estimated at about 1 degree, and humans can detect ITDs of around 10 microseconds. MSO neurons receive excitatory inputs from both ears and measure the difference in the timing of those inputs: different MSO neurons are activated when sound comes from different places. The mechanism makes use of delays in neuronal transmission due to axon length, and MSO neurons only fire if there is simultaneous excitation from both ears. The place of an MSO neuron therefore determines the ITD to which it is tuned, and the output is a map of sound-source location along the azimuth.
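The delay-line idea can be sketched as a Jeffress-style model, which is the classic formalisation of this coincidence-detection scheme. Everything here is a simplified assumption (sample rate, the 200 µs ITD, and coincidence approximated as the product of half-wave-rectified inputs): each model neuron receives the left signal through a different internal delay and responds most when that delay cancels the true ITD.

```python
import numpy as np

fs = 100_000                      # sample rate (Hz)
t = np.arange(0, 0.02, 1 / fs)    # 20 ms of a 500 Hz tone (10 full cycles)
itd = 200e-6                      # sound reaches the left ear 200 us earlier

left = np.sin(2 * np.pi * 500 * t)
right = np.sin(2 * np.pi * 500 * (t - itd))  # right-ear signal lags

# Bank of coincidence detectors, each tuned to a candidate ITD via an
# axonal delay applied to the left input.
candidate_itds = np.arange(-500e-6, 501e-6, 50e-6)
responses = []
for d in candidate_itds:
    delayed_left = np.roll(left, int(round(d * fs)))  # internal delay
    responses.append(np.sum(np.clip(delayed_left, 0, None) *
                            np.clip(right, 0, None)))

best = candidate_itds[int(np.argmax(responses))]
print(f"most active detector tuned to ~{best * 1e6:.0f} us")
```

The most active detector is the one whose internal delay matches the stimulus ITD, so which neuron fires most is itself the code for azimuthal location, the "map" described above.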

What are Mach bands, and how can we explain their occurrence?

Mach bands are adjacent bands of grey increasing in brightness from left to right. When we perceive them, the left-hand side of each band looks lighter, while the right-hand side looks darker. This can be explained assuming ON-centre cells, which have a central excitatory region surrounded by an inhibitory region: point D receives greater inhibition than point A and thus appears darker, while point C receives less inhibition than point B and thus appears brighter [add drawing]. These bands may provide information about the neural mechanisms underlying edge detection, and can be useful in computer vision systems that require an edge detection algorithm.
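The lateral-inhibition account can be sketched in one dimension. The kernel sizes, weights and luminance values are illustrative assumptions: convolving a luminance staircase with a centre-surround (difference-of-Gaussians) profile produces an undershoot on the dark side of each edge and an overshoot on the bright side, matching the illusory dark and light bands.

```python
import numpy as np

# 1-D centre-surround kernel: narrow excitatory centre, broad inhibitory
# surround (surround weighted at half strength so flat regions stay positive).
x = np.arange(-15, 16)
centre = np.exp(-x**2 / (2 * 2.0**2))
surround = np.exp(-x**2 / (2 * 6.0**2))
kernel = centre / centre.sum() - 0.5 * surround / surround.sum()

luminance = np.repeat([0.2, 0.5, 0.8], 100)        # three grey bands
response = np.convolve(luminance, kernel, mode="same")

# Far from an edge the response is flat; next to an edge, the dark side is
# extra-inhibited (looks darker) and the bright side less inhibited (lighter).
dark_side, dark_plateau = response[96], response[50]
bright_side, bright_plateau = response[104], response[150]
print(dark_side < dark_plateau, bright_side > bright_plateau)  # True True
```

The overshoot/undershoot pattern is exactly the lighter-left-edge, darker-right-edge appearance described above, which is why such filtering is a plausible front end for edge detection.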

Evaluate Marr's structural description model

Marr and Nishihara (1978) introduced the idea of part-based structural representations based on three-dimensional volumes and their spatial relations. In particular, they proposed that object parts come to be mentally represented as generalized cones (or cylinders) and objects as hierarchically organized structural models relating the spatial positions of parts to one another. They believed that any system should use an object-centred coordinate system. The significance of this assumption is that the same generalized cones can be recovered from the image regardless of the orientation of the object generating that image. This solves the problem of many-to-one mapping. However, generalised cones are difficult to derive from real images

Describe the advantages and disadvantages of the methods of limits, adjustment and constant stimuli

Method of Limits: the intensity is increased or decreased until the response changes. Errors of habituation: the observer keeps giving the same response and fails to change. Errors of anticipation: the observer knows the threshold is coming and changes responses too soon. These errors are minimised by approaching the threshold from both directions; the threshold is the average of these measurements (performance measure: average setting = brightness threshold).
Method of Adjustment: the observer changes the stimulus level themselves until their report changes, allowing rapid estimation of the threshold.
Method of Constant Stimuli: intensities are presented in a random order, with repeats. This avoids issues of habituation/anticipation, but gives a slower estimate of the threshold and requires a pre-determined range of intensity values to be tested.
Good practice: pilot testing with the method of limits/adjustment -> use this to select parameters for the method of constant stimuli.
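The method of limits can be sketched as a toy simulation. The true threshold of 5.0 and the idealised noiseless observer are assumptions for illustration: intensity steps in one direction until the simulated response changes, and the threshold estimate averages the ascending and descending crossing points, which is how the opposite biases of habituation and anticipation are cancelled.

```python
# Hypothetical method-of-limits run with an idealised observer.

TRUE_THRESHOLD = 5.0

def detects(intensity):
    return intensity >= TRUE_THRESHOLD  # no noise, for clarity

def ascending_series(start=0.0, step=0.5):
    i = start
    while not detects(i):
        i += step
    return i              # first intensity reported "seen"

def descending_series(start=10.0, step=0.5):
    i = start
    while detects(i):
        i -= step
    return i + step       # last intensity reported "seen"

estimate = (ascending_series() + descending_series()) / 2
print(estimate)  # 5.0: averaging both directions recovers the threshold
```

With a real observer each series would overshoot in its own direction (anticipation/habituation), which is precisely why the two directions are averaged.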

Describe evidence for early selection of visual attention

Neisser and Becklen (1975) demonstrated evidence for early selection when they presented participants with a video containing two superimposed scenes: one of people playing basketball and one of people playing a hand-clapping game. Participants were asked to focus on one scene and ignore the other. Most of the ignored video clip could not be reported, and peculiar events, such as male basketball players being replaced by women or the hand players shaking hands, went unnoticed. Further evidence comes from inattentional and change blindness. Simons and Chabris (1999) presented participants with a video of people playing basketball and told them to count the number of passes. They found that people, while attending to the task, failed to notice the appearance of a gorilla (inattentional blindness) or even the curtain changing colour (change blindness). Evidence has also been found with indirect measures. Raymond et al. (1992) used rapid serial visual presentation (RSVP) to rule out memory, as the stimuli are presented too quickly to be remembered. The task was to identify the letter presented in white, and to click the mouse when an X was seen. They found that when the probe appeared immediately after the target it was easily detected, yet when it appeared slightly later (within roughly 200-500 ms) it was frequently missed: the attentional blink. This suggests that there is little processing of information from a different channel at the same time.

What do the responses of the inferotemporal (IT) cortex tell us about face processing?

Neurons in macaque IT respond to faces, where such responses are attenuated by jumbled features, occlusion or changes in viewpoint (Desimone et al., 1984). Specialised brain regions were also found in humans in the fusiform gyrus, known as the Fusiform Face Area (FFA). Kanwisher et al. (1997) demonstrated this with fMRIs that reveal a greater BOLD response in the FFA following the presentation of faces as compared to other objects. This demonstrates a specialised brain region for facial recognition and provides support for the domain specificity hypothesis. Furthermore, evidence has found that FFA activity, while modulated by other objects, is strongest for faces (Grill-Spector et al., 2004). FFA stimulation also distorted patients' perception of faces but not of other objects in the room (Parvizi et al., 2012), suggesting specificity of the FFA.

Know some of the processing properties along the ventral stream hierarchy as discussed in lecture and required reading

Object processing is thought to occur along the ventral stream ("what" pathway), as proposed by Ungerleider and Mishkin (1982). Early processing of object properties occurs in V1/V2, which have been found to be sensitive to edge orientations and are space dependent (Felleman & Van Essen, 1991). V4 cells display preferences for certain curvatures and orientations of those curvatures, demonstrating increasing specificity of information processing. The information is then thought to be passed on to the inferotemporal cortex. Object agnosia also demonstrates that impairment of recognition can be specific to objects, and is not due to a deficit in vision, language, memory or low intellect, suggesting specialised brain regions. However, Haxby (2002) showed that object information is distributed throughout the whole cortex, not confined to category-selective nodes. Gross et al. (1972) reported that neurons in the inferotemporal (IT) cortex of macaques responded most strongly to complex visual stimuli, such as hands and faces. fMRI reveals that viewing objects elicits highly consistent category-specific activations (Chao et al., 1999). There are also top-down influences of knowledge: when viewing an ambiguous stimulus, focusing on the face results in different activations from focusing on the object.

Why is object recognition so difficult?

Object recognition involves not just one task: it involves categorisation, segmentation, recognition and classification. Evidence for this difficulty can be seen in object permanence, where children under 8 months do not perceive objects as stable entities when they are occluded (Piaget, 1963). Object processing is difficult due to the "many-to-one" mapping problem: an object can be viewed in an infinite number of ways and thus generates an infinite number of images on the retina. In addition, lighting, pose, position and distance can all vary, yet we are able to recognise the specific instance of an object. Furthermore, degrading the image through occlusion, distortion, noise or filtering affects the amount of information we receive about the object and its projection onto the retina, yet we have to extract some degree of constancy to recognise the image. Bruce et al. (2003) describe this as the problem of stimulus equivalence: if the stimulus controlling behaviour is a pattern of light or image falling on the retina, then there is an infinite set of stimuli that must produce equivalent effects, and be distinguished from other sets of images.

Describe how the method of constant stimuli can be used to assess appearance

The objective method of constant stimuli approach can determine stimulus appearance as well as performance. This is clearest in orientation discrimination: a 2AFC discrimination task relative to a vertical reference ("Is the Gabor oriented clockwise or counter-clockwise of vertical?"), which is particularly interesting when our perception of vertical is altered. We could plot the data as percent correct (as before), but if we instead plot percent counter-clockwise (CCW) responses, the midpoint now tells us something different: it is the point of subjective equality (PSE). To measure the threshold, take the difference in orientation required to go from 50% to 75% CCW responses; here a rotation of 3.5 degrees is required to judge a CCW rotation. Changes in PSE: if we change the way things appear, then the PSE will shift.
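The PSE and threshold described above can be extracted by fitting a psychometric function to the percent-CCW data. This is a hypothetical worked example: the response proportions are made up, the cumulative Gaussian is one standard choice of function, and the crude grid-search fit just keeps the sketch dependency-free.

```python
import math
import numpy as np

orientations = np.array([-8, -6, -4, -2, 0, 2, 4, 6, 8], float)  # deg re: vertical
p_ccw = np.array([0.02, 0.06, 0.16, 0.30, 0.50, 0.70, 0.84, 0.94, 0.98])

def cum_gauss(x, pse, sigma):
    """Cumulative Gaussian: P(respond CCW) for a stimulus at orientation x."""
    return 0.5 * (1 + math.erf((x - pse) / (sigma * math.sqrt(2))))

# Least-squares fit over a parameter grid (PSE = 50% point; sigma = slope).
pse, sigma = min(
    ((p, s) for p in np.linspace(-2, 2, 81) for s in np.linspace(1, 8, 141)),
    key=lambda ps: sum((cum_gauss(x, *ps) - y) ** 2
                       for x, y in zip(orientations, p_ccw)),
)

threshold = 0.674 * sigma  # z(0.75) * sigma: rotation taking 50% -> 75% CCW
print(f"PSE = {pse:.2f} deg, threshold = {threshold:.2f} deg")
```

A shift in the fitted PSE away from zero would indicate a change in apparent vertical (appearance), while a change in sigma, and hence the 50%-to-75% distance, indicates a change in discrimination performance.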

Evaluate evidence supporting FIT

One piece of evidence supporting FIT is illusory conjunctions. Treisman and Schmidt (1982) had participants remember digits at the two ends of a five-symbol string, and then report as much as they could about the three coloured letters in the middle. They found more conjunction errors, where participants incorrectly combined the letters and colours of the targets, than feature errors, where participants reported an incorrect colour or letter. This supports FIT: attention was consumed by the primary digit task, leaving features free-floating and uncombined, so that during the recall stage these features were incorrectly bound together. Corbetta et al. (1991; 1995) demonstrated further evidence for FIT using positron emission tomography (PET). They found that selective attention to shape, colour or speed activates brain regions related to the perception of colour (V4), motion (V5) and shape (IT). By studying the activity of the parietal lobe in a variety of visual search tasks involving colour and motion, they found that conjunction search (colour and motion) but not feature search (colour/motion alone) activates the parietal lobe, suggesting that this attention-related region is engaged only when attention is required to bind features.

Explain the parts and the functions of the outer ear

One of the key parts of the outer ear is the pinna, the visible part of the ear that resides outside the head. It funnels sound waves into the auditory canal, and its grooves and ridges impose a spectral shape on the sound, allowing the determination of both the elevation of the sound source and whether it originates in front of or behind the head. Batteau (1967) described this in the Pinna Filtering Effect Theory, which states that the pinna, due to its asymmetry, helps localise sound along the vertical axis (elevation). The next part is the ear canal, a tube running from the pinna to the eardrum, closed at one end and open at the other. The ear canal acts as a simple resonator, amplifying sounds near its resonant frequencies. Its function can be extended by cupping the ears, which is known to help with listening (Wiener & Ross, 1946).

Give an overview of pictorial cues used in human depth perception

Pictorial cues are monocular, static cues. There are generally 4 types of pictorial cues used in human depth perception; they give a good depth percept in natural scenes and are largely based on heuristics. They also rely on assumptions about the world that disambiguate the retinal input, e.g. that light comes from above. Relative height and size - retinal image size is proportional to object size and inversely proportional to the distance of the object. Occlusion - provides a cue to depth order, where a T-junction provides information about which surface is in front. Shading and shadows - shadows give information about what is above and the relative elevation of an object. Atmospheric/aerial perspective - distant surfaces appear hazier and bluer due to light scattering by the atmosphere. Cues can also be combined by placing them in conflict and weighting them by reliability.
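The reliability weighting mentioned in the last sentence can be sketched as a maximum-likelihood-style weighted average. The cue names, estimates and noise values below are invented for illustration: each cue's depth estimate is weighted by its reliability (inverse variance), so the combined percept sits closer to the more reliable cue and is more precise than either cue alone.

```python
# Toy reliability-weighted cue combination.

def combine_cues(estimates, sigmas):
    weights = [1 / s**2 for s in sigmas]          # reliability = 1 / variance
    total = sum(weights)
    combined = sum(w * e for w, e in zip(weights, estimates)) / total
    combined_sigma = (1 / total) ** 0.5           # better than the best cue alone
    return combined, combined_sigma

# Hypothetical conflict: a size cue says 2.0 m (reliable, sigma 0.2 m) while
# an aerial-perspective cue says 3.0 m (noisy, sigma 0.6 m).
depth, sigma = combine_cues([2.0, 3.0], [0.2, 0.6])
print(depth, sigma)  # pulled toward the reliable cue, with reduced uncertainty
```

Placing cues in experimental conflict, as the text describes, is exactly how the weights are measured: the combined percept's position between the two single-cue estimates reveals their relative reliabilities.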

Describe the plasticity of the auditory cortex

Plasticity of the auditory cortex is experience-dependent, whereby owl monkeys trained to discriminate between two frequencies close to 2500Hz have enlarged representation of the target frequencies (Recanzone et al., 1993). This demonstrates how neuronal representations can change to help the monkey cope better with the task. Furthermore, Fritz et al. (2003) have shown that when a ferret is trained to respond to a specific tone in a sequence, its individual neurons can become tuned to the target frequency. Such effects have also been found in humans, where musicians have more of the auditory core responding to piano tones and generating stronger neural responses.

Describe the columnar structure present in the V1

There are ocular dominance columns of cells that respond preferentially to input from one eye: input from the left or right eye is mapped onto different layers of the LGN, and therefore of the visual cortex. This accounts for the blindspot, where information from the other eye dominates. Down an ocular dominance column, neurons share the same orientation preference; moving across the cortex, cells have different orientation preferences, demonstrating the existence of orientation columns. V1 cells are sensitive to spatial frequency, colour, direction, binocular disparity, etc.; for example, chemical staining of the cortex has revealed colour-selective cells organised in columns which show up as blobs. V1 is thus organised into hypercolumns composed of repeating columns, each containing one left and one right eye dominance column, and a full set of orientation and colour specificities.

Explain the prevalence of colour blindness

The prevalence of colour blindness can be explained by possible advantages conferred on colour-blind individuals. For example, Bosten et al. (2005) found that individuals with deuteranomaly can spot the difference between two shades of khaki (a shift in the processing of green light towards brown?). Also, Morgan et al. (1992) found that dichromats are able to break certain types of camouflage more easily, perhaps because they only have to process along a single dimension (orientation), as they cannot distinguish between red and green (which normal individuals have to process in a serial manner). This might allow them to spot certain prey more easily when the prey is shrouded by camouflage.

Explain the Theory of Structuralism

Proposed by Edward Titchener, Structuralism sought to deconstruct experience into its component sensations. The Gestalt psychologists reacted against this, emphasising the relations between sensations (grouping) and stating that the whole is greater than the sum of its parts. This is captured by the Gestalt principles: Proximity and Similarity - items which are close to each other, or which are similar, are grouped together. Closure - when an object is incomplete or a space is not completely enclosed, if enough of the shape is indicated, people perceive the whole by filling in the missing information. Continuation - we tend to continue shapes beyond their ending points; the edge of one shape will continue into the space and meet up with other shapes or the edge of the picture plane. Common Fate - visual elements that move at the same speed and/or in the same direction are perceived as parts of a single stimulus. Pragnanz - people perceive and interpret ambiguous or complex images in the simplest form possible. The Gestaltists thus proposed grouping principles that constrain perceptual solutions.

What does prosopagnosia tell us about face processing?

Prosopagnosia is the failure to distinguish between faces, despite normal visual acuity and cognitive ability. Prosopagnosia can be acquired after damage to inferior occipital regions and fusiform gyrus. However, it often coincides with general object agnosia, where one is able to see the components of objects but lack the ability to integrate them together, which is consistent with the expertise hypothesis. However, prosopagnosia can also be developmental and occur because of prenatal injuries (or hereditary cases). It appears to only affect facial processing. Schmalzl et al. (2008) demonstrated 4 generations of one family who scored <60% for familiar face recognition without external cues despite otherwise normal vision.

Compare and contrast the types of ganglion cells

Retinal ganglion cells collect information from photoreceptors to be sent to the brain, achieving impressive data compression; each ganglion cell has a circular receptive field. Ganglion cells have a centre-surround organisation with excitatory and inhibitory regions: a cell can be either an ON-centre cell, where the centre region is excitatory, or an OFF-centre cell, where the centre region is inhibitory. They are thus good at detecting changes in light rather than absolute values. There are generally two types: midget and parasol. Midget cells show strong colour selectivity, have small receptive fields, give slow sustained responses and have high spatial resolution; parasol cells show poor colour selectivity, have large receptive fields, give rapid transient responses and have low spatial resolution.

What does the other-race-effect tell us about face processing?

Rhodes et al. (1989) found that European subjects are far worse at recognising Chinese faces, with little to no inversion effects, and vice versa for Chinese subjects. In essence, we are experts at recognising faces of our own race (other-race-effect). Shepherd et al. (1974) demonstrated the generalisability of the effect when he replicated it amongst Caucasian and African subjects. This is clear support for the expertise hypothesis, where increased exposure and expertise helps to hone the facial recognition system. However, perhaps this can still fit within the idea of a dedicated face-recognition system, where expertise is used to refine and improve the specialised facial recognition system.

What are the differences between rods and cones cells?

Rods contain rhodopsin, while cones contain red-, green- or blue-sensitive opsins. Rods are found predominantly in the periphery, while cones are concentrated in the fovea. Rods enable scotopic vision and cones photopic vision, so rods play a critical role in dark adaptation: during the day rhodopsin is bleached, and on entering a dark environment it takes about 20-30 minutes for rhodopsin to replenish and full scotopic sensitivity to return. Cones, on the other hand, play a critical role in colour perception, as each type of cone is sensitive to a different range of wavelengths of light. Together, they give us information about the intensity and wavelength of any stimulus we perceive.

How may cochlea implants be used together with hearing aids?

Significant residual hearing is found in about 50% of adult CI candidates (UK CI Study Group, 2004), which has prompted suggestions to combine hearing aids with CIs. Sucher et al. (2009) demonstrated that 9 adults using a CI + HA showed significantly better performance at melody identification than people with a CI alone.

Describe the evidence that there are regions in the brain that integrate multisensory cue information

Super-additive responses of neurons have been found in the superior colliculus of cats, where neurons fire more frequently when auditory and visual cues are presented together than when either cue is shown alone. This effect has also been demonstrated in humans, where there is a greater response to combined audiovisual stimuli in the superior temporal sulcus. However, as this information was obtained using fMRI, which collapses across many neurons, it is confounded whether the enhanced response reflects integrating neurons or the activation of more single-cue neurons. Ban et al. (2012) used a maximum likelihood estimation approach for fMRI that uses cue conflict to disentangle whether an area integrates cues or keeps them separate, using binocular disparity and relative motion as cues to depth. They found that even though one cue was sufficient to judge depth, conflict between the cues activated specific regions, suggesting integration in the dorsal stream.

Describe synaesthesia and the effects

Synaesthesia is a condition in which stimulation of one sensory or cognitive pathway leads to automatic, involuntary experiences in a second sensory or cognitive pathway; there is a consistent experience of cross-modal 'contamination'. For example, Stroop effects for synesthetic associations have been found (Mattingley et al., 2001), and in common colour Stroop tasks synesthetes show stronger activation in colour area V4 (Rouw et al., 2011). A further study by Rich et al. (2005), however, found consistency in colour/letter or number associations between synesthetes and non-synesthetes (e.g. Y for yellow), suggesting a role for early experience or even general preferences for correspondence.

Describe the feature integration theory

The FIT (Treisman & Gelade, 1980) holds that attention is critical to the formation of bound representations of objects and, by extension, proposes that attention is critical to our conscious experience of those bound representations. In FIT, the visual system decomposes the visual scene into its composite features, arrayed in a set of 'feature maps'. This addresses the binding problem: Zeki (1976) showed that there are specialised populations of neurons corresponding to separate features in the visual world (e.g. colour coded by V4, motion coded by V5), yet we still perceive coherent objects. FIT states that attention binds these different features together.

Describe and evaluate the maximum likelihood model of multisensory integration

The Maximum Likelihood Model proposes that, to determine the multisensory percept, cues are weighted by their reliability, maximising the precision and robustness of the resulting estimate through the combination of individual signals. Cues in the world are inherently imperfect due to noisy neuronal signalling or ambiguity, so their reliabilities vary; the width of each cue's estimate distribution reflects its reliability, with higher variance meaning lower reliability. Ernst & Banks (2002) used virtual reality to put visual and haptic information into conflict, varying visual reliability by adding noise pixels to the image. They found that when visual information was reliable, the size of the conflict block was judged closest to its seen size, giving greater weight to visual cues than haptic ones.
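
The reliability-weighted combination rule can be sketched in a few lines of Python (the stimulus values and variances below are made up for illustration; the rule itself is standard inverse-variance weighting):

```python
def combine_cues(mean_v, var_v, mean_h, var_h):
    """Maximum-likelihood cue combination: inverse-variance weighting."""
    r_v, r_h = 1.0 / var_v, 1.0 / var_h        # reliability = 1 / variance
    w_v = r_v / (r_v + r_h)                    # weight on the visual cue
    combined_mean = w_v * mean_v + (1.0 - w_v) * mean_h
    combined_var = 1.0 / (r_v + r_h)           # never worse than the best cue
    return combined_mean, combined_var

# illustrative numbers: vision says 10 mm (variance 1), touch says 12 mm (variance 4)
size, var = combine_cues(10.0, 1.0, 12.0, 4.0)  # vision gets 80% of the weight
```

Degrading vision (raising its variance, as Ernst & Banks did with noise pixels) shifts the weight, and hence the judged size, towards the haptic estimate.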

Describe evidence for late selection of vision

The Stroop (1935) task is evidence for late selection, as words which participants were told to ignore were still perceived. However, some have criticised this on the grounds that the irrelevant dimension was still part of the attended object. Gatti and Egeth (1978) therefore separated the words from the colour patches, but still demonstrated Stroop interference. Eriksen and Eriksen (1974) demonstrated that incompatible distractors slowed identification of the target relative to compatible distractors. Such response competition effects show that the identity of distractors is perceived, providing support for late selection.

Describe the functions of the auditory cortex and its organisation

The auditory cortex is responsible for segregating and recognising auditory objects, performing further analyses on auditory information, and deconstructing complex sound patterns (e.g. voices, speech, music). It also allows you to attend to sound against background noise and to learn about sound (plasticity), and it is the point at which an individual has conscious perception of sound. Damage to the auditory cortex can lead to cortical/central deafness, or even auditory agnosia. The auditory cortex contains a sound-processing hierarchy, from the core to the belt to the parabelt, and subsequently higher auditory areas. Tonotopically organised auditory fields are found in the auditory core, which shows narrow frequency tuning and responds to pure tones and simple sounds, and in the auditory belt, which shows broader frequency tuning and responds to band-passed noise and complex sounds. This tonotopy is lost by the parabelt.

Describe the parts of the central auditory pathway

The central auditory pathway receives information from the auditory nerve leading from the cochlea. It first reaches the cochlear nucleus, which is responsible for decoding intensity, analysing temporal parameters, and completing and transmitting the frequency analysis carried out in the cochlea. Information from the ventral nucleus is then transmitted to the superior olive, while information from the dorsal nucleus bypasses it and goes straight to the lateral lemniscus. The superior olive contains multiple nuclei that integrate auditory information before sending the filtered information on to other neural structures. It is mainly responsible for localising sound along the azimuthal axis and contains the medial superior olive and the lateral superior olive, which use interaural time differences and interaural level differences respectively to localise sound. The lateral lemniscus primarily integrates input from the ventral and dorsal CN before passing it to the inferior colliculus (midbrain), where information is integrated for sound feature extraction (e.g. sound localisation) and neurons are also sensitive to ITD and ILD. It then passes information to the medial geniculate body (thalamus), which also receives somatosensory and visual inputs, before passing it on to the primary auditory cortex.
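
The ITD cue that the medial superior olive exploits can be illustrated with Woodworth's classic spherical-head approximation (a textbook formula; the head radius and speed of sound below are assumed typical values, not from this card):

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, speed_of_sound=343.0):
    """Interaural time difference for a distant source, spherical-head model:
    ITD = (a / c) * (theta + sin(theta)), theta measured from straight ahead."""
    theta = math.radians(azimuth_deg)
    return (head_radius / speed_of_sound) * (theta + math.sin(theta))

itd_front = woodworth_itd(0)    # source straight ahead: no time difference
itd_side = woodworth_itd(90)    # source directly to one side: maximal ITD
```

For an average adult head this gives a maximum ITD of roughly 0.6-0.7 ms, the sub-millisecond range that the MSO's coincidence detection must resolve.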

Summarise the mechanism of a cochlea implant

The cochlear implant mainly acts as a substitute for faulty or missing inner hair cells by directly electrically stimulating residual auditory nerve fibres. As a speech processor, its essential features are to transduce the acoustic signal into an electrical form, process it in various ways, and convert the resulting electrical signal into a form appropriate for stimulating the auditory nerve. It also performs compression to address the limited dynamic range of electro-cochlear stimulation. Sound is received by the microphone of the speech processor, then digitized, analysed and transformed into coded signals, which are sent to the transmitter. The transmitter sends the code across the skin to the internal implant, where it is converted to electric signals. These are sent to the electrode array to stimulate the residual auditory nerve fibres in the cochlea, and the resulting signals travel to the brain, carrying information about sound.
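
The "digitized, analysed and transformed" step can be illustrated for one channel with a toy envelope extractor: rectify, then smooth. This is only a sketch of the idea behind envelope-based coding such as CIS; real processors use proper band-pass filter banks, compression and pulse mapping, and all the numbers below are assumed:

```python
import math

def envelope(signal, window):
    """Crude envelope detector: full-wave rectification + moving average."""
    rect = [abs(s) for s in signal]
    half = window // 2
    out = []
    for i in range(len(rect)):
        chunk = rect[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

fs = 8000                     # Hz, sampling rate (illustrative)
carrier_hz = 1000.0           # stands in for one filter-bank channel's band
modulator_hz = 4.0            # slow envelope, as in speech amplitude variation
signal = [(0.5 + 0.5 * math.sin(2 * math.pi * modulator_hz * i / fs))
          * math.sin(2 * math.pi * carrier_hz * i / fs) for i in range(fs)]
env = envelope(signal, 200)   # 25 ms smoothing window
```

In a CIS-style processor, an envelope like `env` would set the amplitudes of the interleaved pulses delivered to that channel's electrode.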

Describe the organisation of the cochlea nucleus

The cochlear nucleus (CN) preserves tonotopy: lower-frequency sound information is sent to the ventral regions, while higher-frequency information is sent to the dorsal regions. It comprises two cranial nerve nuclei, the ventral CN (VCN) and the dorsal CN (DCN). The VCN is fast and temporally precise. It contains bushy cells, which fire one AP at sound onset, as well as stellate cells, which fire a regularly spaced train of APs for the duration of the sound; these help to pass information about intensity and time intervals to the superior olive. The DCN has high temporal precision and carries out complex spectral analysis. It contains fusiform cells, responsible for broad frequency tuning, as well as octopus cells, responsible for the onset response. The DCN projects past the superior olive, directly to the lateral lemniscus.

Summarise the evidence for and against Milner and Goodale Dual Stream Theory.

The dual stream theory of Milner and Goodale (1992) emerged from the differing requirements of vision for action and vision for perception: action requires input in an egocentric frame of reference, while perception requires input from multiple frames of reference (e.g. viewer-based, scene-based). Neurological evidence: Patient DF (Goodale et al., 1994) suffered brain damage from CO poisoning, resulting in object agnosia in which she was unable to recognise complex objects. She failed to correctly orient a card when asked to match it to a slot, but had no orienting problems when asked to post the same card, suggesting an impaired ventral stream ('what') but an intact dorsal stream ('how'). Conversely, patients with optic ataxia have intact object recognition but difficulty grasping objects based on those same properties. Behavioural evidence: Aglioti et al. (1995) found that grasp aperture is unaffected by visual illusions (e.g. the Ebbinghaus illusion), suggesting vision for perception relies on different visual analyses than vision for action. However, this held only if visual feedback was provided during the grasp, suggesting the planning stage is sensitive to the illusion (Bruno & Franz, 2009).

Describe evidence for early selection model (Broadbent, 1958)

The early selection model (Broadbent, 1958) comprises a sensory register, a filter, an area for semantic analysis, and then response selection. It handles attended and unattended information differently at the early stages of perceptual processing: when an input is first registered by the senses, the sensory buffer holds the information for a short time before it is filtered based on its physical characteristics. Only attended information undergoes semantic analysis, which ultimately determines our responses and experiences. Evidence for early selection came from Cherry (1953) using a dichotic listening task. He presented two messages through headphones, with each channel defined by the ear of presentation. Attention was manipulated towards one channel through 'shadowing', where participants had to repeat aloud the words from the attended channel. He found that while some properties of the unattended channel could be recognised (e.g. speaker gender, human speech vs. tone), other changes, such as a switch of language or speech played backwards, often went unnoticed. Moray (1959) found that even when repeated 35 times, content in the unattended stream was still not recognised.

Describe the role of the anterior intraparietal area (AIP)

In the grasp-centred frame of reference, objects in space are encoded relative to their grasping properties and shape. Such a representation is thought to be encoded by neurons in the anterior intraparietal area (AIP), which contains neuro-circuitry suitable for converting object shape into an appropriate grasp response. AIP neurons are active during fixation and manipulation of objects. These neurons are highly responsive to the size, shape and orientation of objects, sometimes even highly selective for a specific object geometry, e.g. selective responses for plates but not for cylinders (Murata et al., 2000). AIP neurons can be subdivided into three groups according to their visuomotor discharge properties (Sakata et al., 1995): 'motor dominant neurons' fire to fairly similar degrees during object manipulation in the light and in the dark; 'visual dominant neurons' discharge during manipulation of objects in the light but not in the dark; and 'visual-and-motor neurons' show an intermediate behaviour, with less activation during manipulation in the dark than in the light. Anatomically, area AIP is connected to the ventral premotor cortex, especially to motor area F5, whose neurons also discharge during specific object-related hand movements and even on mere presentation of a 3D object without subsequent manipulation (Murata et al., 1997). It seems that area AIP, in combination with area F5, transforms the 3D properties of an object into appropriate finger formations and hand orientation for visually guided grasping movements (Murata et al., 2000).

Describe the guided search theory

The guided search theory (Wolfe et al., 1989) states that salient features can form a perceptual group, allowing attention to be restricted to each group and thus allowing search to proceed in parallel within a group. They demonstrated that despite increasing the set size from 15 to 30 distractors (black triangles and white circles) while participants searched for a black circle, participants showed no significant increase in RT, suggesting that search proceeded in parallel even for conjunction searches.

Describe the inhibition theory of selection attention

The inhibition theory suggests that ignored information is still perceived but responses towards it are inhibited. This was demonstrated through negative priming effects. Tipper and Cranston (1985) asked participants to read blue letters aloud while ignoring red letters superimposed on them. They found that when the target letter had been the ignored letter on the previous trial, reaction times increased; such inhibition arose because participants had to re-attend to a letter they had previously ignored. Stronger evidence comes from findings of negative priming in the flanker task (Tipper et al., 1988): when the flanker that participants had to ignore became the target on a subsequent trial, reaction times were slower compared to the control condition.

Describe evidence for late selection (Deutsch & Deutsch, 1963) of auditory attention

The late selection model (Deutsch & Deutsch, 1963) states that recognition of familiar objects proceeds unselectively and without capacity limitation. Information is first semantically analysed, and its relevance then determines whether attention is paid to it, later affecting our responses. Its main criticism of early selection studies is that they measured memory rather than perception; the model thus holds that unattended perception equals attended perception. Cocktail party effect (Moray, 1959): the early selection model could not explain why certain things in the unattended channel can be heard. Moray described the cocktail party effect, where one can tune in to one conversation yet still detect words of importance in the un-tuned stream (e.g. your own name). He found that when an instruction to stop the experiment was prefixed by the participant's name in the unattended stream, the proportion who noticed increased from 6% to 33%. The effect is so powerful that sleeping individuals are often awakened by their own name (Oswald et al., 1960). Gray and Wedderburn (1960) also found that semantic meaning can define a channel, suggesting processing of unattended information: using the dichotic listening task, they found that subjects often grouped items into meaningful phrases (e.g. 'Dear Aunt Jane') even though the words were presented to different ears. Unattended semantic information can even disambiguate content in the attended stream (MacKay, 1973).

Explain the function of the middle ear

The middle ear is the part of the ear between the tympanic membrane (ear drum) and the oval window. It helps to transfer pressure from a big area (the ear drum) to a smaller area (the oval window) through the vibration of the ossicles. This allows the transfer of movement to the fluid-filled cochlea without significant loss of energy, by increasing the pressure on the oval window to around 20 times that at the ear drum. This solves the problem of impedance mismatch, which arises because sound has to travel from one medium (air) to another (water) and would otherwise lose significant energy. The middle ear is also connected to the Eustachian tube, which acts as a pressure release valve to adjust the middle ear pressure. This is important because the proper function of the middle ear depends on a mobile tympanic membrane capable of vibrating in response to a sound wave; for the tympanic membrane to have maximal mobility, the air pressure within the middle ear must equal that of the external environment.
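
The roughly 20-fold pressure gain can be sanity-checked with common textbook figures (the areas and lever ratio below are assumed approximate values, not stated on this card):

```python
# Approximate anatomical values (assumptions for illustration):
area_eardrum_mm2 = 55.0   # effective vibrating area of the tympanic membrane
area_oval_mm2 = 3.2       # area of the stapes footplate at the oval window
ossicular_lever = 1.3     # mechanical advantage of the ossicular chain

# Same force concentrated on a smaller area -> higher pressure,
# further boosted by the lever action of the ossicles
pressure_gain = (area_eardrum_mm2 / area_oval_mm2) * ossicular_lever
```

This lands in the region of a 20-fold pressure increase, consistent with the figure quoted above.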

Explain the motion after-effect

The motion after-effect can be explained by Sutherland's (1961) ratio model. He theorized that perceived motion is determined by the relative activity in pairs of neurons tuned to opposite directions; at baseline, they fire at equal rates. After prolonged exposure, neural adaptation reduces the activity of the neuron tuned to the adapted direction, so that when a stationary test is presented its firing falls below baseline, causing motion to be perceived in the opposite direction.
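
The ratio model's opponent logic can be sketched directly (the firing rates are invented for illustration):

```python
def perceived_direction(rate_right, rate_left):
    """Opponent read-out: positive -> rightward, negative -> leftward, 0 -> static."""
    return rate_right - rate_left

baseline = 10.0                                             # spikes/s, equal at rest
static = perceived_direction(baseline, baseline)            # stationary test, no adaptation
adapted_right = 6.0                                         # rightward unit depressed by adaptation
aftereffect = perceived_direction(adapted_right, baseline)  # stationary test after adapting
```

After rightward adaptation the stationary test yields a negative (leftward) signal: the waterfall-style after-effect.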

Describe the perceptual load theory and evaluate the evidence

The perceptual load theory (Lavie, 1995) resolves the early vs. late selection debate by stating that late selection typically occurs under low perceptual load, while early selection typically occurs under high perceptual load. A high load consumes full capacity and results in selective perception; a low load leaves spare capacity to process distractors, resulting in non-selective perception. Perception therefore has a limited capacity but proceeds automatically on all stimuli, relevant or irrelevant, within that capacity. Lavie & Cox (1997) demonstrated response competition effects under different perceptual loads: participants had to search for a target presented either alone or with 5 other letters, and distractor interference effects appeared under low load but not under high load. Rees et al. (1997) had participants perform either an easy or a hard linguistic task on identical stimuli surrounded by moving or static distractors. Perceptual load reduced activity related to the presence of motion: activity in V5/MT to moving distractors was significantly reduced under high load, and was found only under conditions of low load.

Compare the theories of constructivism with the ecological approach

The theory of constructivism views perception as inference: not much information reaches the brain, so it emphasizes the poverty of the retinal image. Helmholtz described perception as a probabilistic process, constructed from sense data on the basis of prior knowledge, and sought to discover the underlying assumptions through the use of ambiguous stimuli. The ecological approach (Gibson), by contrast, emphasized the wealth of available information: the brain picks up natural information around us directly, with no need for an educated guess. Gibson proposed that the visual system evolved to detect affordances for action; however, he failed to offer computational explanations of how the brain picks up this complicated information.

How does the visual system combine cues to depth?

The visual system combines cues to depth by putting the cues into competition: when texture and binocular cues conflict over the direction of surface slant (Hillis et al., 2004), both cues are used to form a percept in between. The visual system can also weight cues by their reliability (O'Brien & Johnson, 2000).

Describe the components of Fourier Analysis

There are generally four components to Fourier analysis: Fourier (1822) showed that any signal can be decomposed into individual sine waves differing in four properties. Amplitude gives luminance contrast, the difference between light and dark regions in the scene. Phase determines the point in space at which variations occur (the starting point of the cycle, measured in radians) and thus the position of edges in the scene. Orientation gives the orientation of the variation; for 2D images this is a key dimension of visual processing. Spatial frequency determines the rate of variation across space, reported as the number of cycles in a spatial region; it captures fine vs. coarse detail in an image and so gives the size. Components can also be summed together (Fourier synthesis): take a sine wave of matched spatial frequency (the fundamental), then add the odd harmonics (of increasing spatial frequency) with decreasing amplitude.
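
The synthesis recipe in the last sentence (a fundamental plus odd harmonics with 1/n amplitudes) builds a square wave, and can be checked in a few lines of Python:

```python
import math

def square_wave_partial(t, n_terms, freq=1.0):
    """Fourier synthesis of a square wave: odd harmonics with amplitude 1/n."""
    total = 0.0
    for k in range(n_terms):
        n = 2 * k + 1                                  # odd harmonics: 1, 3, 5, ...
        total += math.sin(2 * math.pi * n * freq * t) / n
    return (4.0 / math.pi) * total

crest = square_wave_partial(0.25, 50)   # converges towards the square wave's +1
trough = square_wave_partial(0.75, 50)  # converges towards -1
```

With only the fundamental the edges are smooth; adding higher harmonics sharpens them, which is exactly the coarse-vs-fine (low vs. high spatial frequency) distinction described above.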

Briefly summarise the theories of multisensory integration

There are generally three theories that explain multisensory integration. Visual Dominance Theory: the McGurk effect and the ventriloquist effect demonstrate that sound and touch conform to the visual percept; the theory ascribes this to vision being the dominant sense in humans, dominating all multisensory percepts. However, the double flash illusion, where a single flash accompanied by a double beep is seen as two flashes while a single beep yields only one, suggests that sound can alter visual information and dominate vision instead. Modality Appropriateness Hypothesis: the double flash illusion demonstrated how vision can be changed by sound. This theory suggests that audition is more precise for temporal judgments and so dominates in the temporal domain, while vision is more precise for spatial judgments and so dominates in the spatial domain: discrepancies between stimuli are resolved in favour of the sense that is best or most reliable in the task domain. Maximum Likelihood Model: to determine the multisensory percept, cues are weighted by their reliability, maximising the precision and robustness of the resulting estimate through the combination of individual signals; the width of each estimate's distribution reflects cue reliability, with higher variance meaning lower reliability.

Describe the types of colour blindness

There are many different types of colour blindness, impairing colour vision in different ways. Monochromats (rare): rod monochromats lack all cones and are unable to perceive and discriminate colours; cone monochromats possess only one cone type. Dichromats: protanopia is the loss of L cones; deuteranopia the loss of M cones; tritanopia the loss of S cones. Anomalous trichromacy involves a mutation of one cone type, causing a shift in the wavelengths of light that the cone processes: protanomalous (L cones), deuteranomalous (M cones), tritanomalous (S cones). There seem to be gender differences across the types of colour blindness, suggesting that colour blindness is linked to the X chromosome. There is also evidence of possible human tetrachromacy, explained by anomalous gene coding of a cone pigment on one X chromosome but not the other (about 12% of women).

Describe how the eye discounts eye movement to encode motion

Two theories explain how the eye discounts eye movement to encode motion; in both, the brain compares the motion that should occur due to the eye movement with the motion actually seen. Sherrington proposed the eye-muscle (inflow) signal theory: when the eye muscles move, a muscle movement signal is sent back to the brain, where it is compared with the incoming retinal motion. On this theory, when the eye moves, motion should be discounted whether or not the movement was intended, and when the eye intends to move but cannot, no movement should be seen. Helmholtz instead proposed the efference copy (outflow) signal theory: a movement command is sent to the eye muscles together with a copy of that command (the efference copy) to the brain, where it is compared with the retinal motion. On this theory, motion should be discounted only when movement was intended, and when the eye intends to move but cannot, the world should seem to shift. Helmholtz's theory seems more viable, but both are able to account for motion after-effects and for distinguishing between the eye moving and an object moving.
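
Helmholtz's comparison amounts to a one-line subtraction; a schematic sketch with made-up angular values:

```python
def perceived_world_motion(retinal_motion, efference_copy):
    """Outflow theory: subtract the motion predicted from the motor command."""
    return retinal_motion - efference_copy

normal_saccade = perceived_world_motion(5.0, 5.0)   # eye moves as commanded
paralysed_eye = perceived_world_motion(0.0, 5.0)    # command sent, eye stuck
eye_pressed = perceived_world_motion(5.0, 0.0)      # eyeball pushed, no command
```

The stable world during a normal saccade (output 0), the apparent world shift with a paralysed eye, and the visible jump when you press on your own eyeball all fall out of the same subtraction.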

Describe how the eye encodes motion even though the photoreceptors can only encode light in the moment.

This can be done through the Reichardt detector, which detects movement of an image across the retina; it was first proposed by Reichardt (1957; 1961) to account for motion perception in insect eyes. The Reichardt detector uses a delay-and-compare procedure: the input from the receptive field where the object is first detected is delayed and compared with a second input from another receptive field which detects the object later. The strength of the output is determined by the coherence between the signals. However, an individual Reichardt detector will also respond to stationary stimuli, so two opposite-direction Reichardt detectors must be linked together to code motion. Different speeds and directions can be coded by different delay times and spacings between receptive fields. Nonetheless, the neural basis has not actually been found in humans, and other models have been proposed.
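
The delay-and-compare scheme, including the opponent pairing needed to reject stationary stimuli, can be sketched over discrete time steps (the stimulus sequences are invented):

```python
def reichardt_output(left, right, delay=1):
    """Opponent Reichardt detector: correlate each receptor's delayed signal
    with its neighbour's current signal, then subtract the two subunits."""
    n = len(left)
    rightward = sum(left[t - delay] * right[t] for t in range(delay, n))
    leftward = sum(right[t - delay] * left[t] for t in range(delay, n))
    return rightward - leftward

# a spot moving left-to-right hits the left receptor one step before the right
spot_l = [0, 1, 0, 0]
spot_r = [0, 0, 1, 0]
moving = reichardt_output(spot_l, spot_r)
# a stationary flash drives both receptors at once; the opponent stage cancels it
flash = [0, 1, 0, 0]
static = reichardt_output(flash, flash)
```

Here `moving` comes out positive (rightward) and flips sign if the sequences are swapped, while `static` is zero, matching the point that a single, non-opponent subunit would respond even to stationary stimuli.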

How to measure appearance?

There are three main methods: matching, scaling, and the point of subjective equality (PSE). Matching is a simple way to measure the perceived equivalence of two stimuli: ask observers to match their appearance. For example, with two patches of colour, match the appearance of a narrowband yellow reference with a test patch made by superimposing red and green lights. This allows the measurement of metamers (stimuli that are physically dissimilar but perceptually identical). Scaling, formalised by Stanley Smith Stevens, measures the perceived difference between two stimuli: show a reference stimulus and assign it a value (e.g. 10), then show a test stimulus at a different intensity; the observer assigns the test a number proportional to the reference (e.g. if it appears twice as bright it is reported as 20), and a range of stimulus differences is tested this way. The last method uses the PSE, via the method of constant stimuli: responses are plotted as the percentage of a given response (rather than percentage correct), and the midpoint of the resulting function is the PSE.
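
The PSE read-out from the method of constant stimuli can be sketched as interpolating the 50% point of the response curve (the data below are invented):

```python
def estimate_pse(levels, prop_test_greater):
    """Linearly interpolate the stimulus level at which the test is judged
    greater than the reference on 50% of trials (the PSE)."""
    points = list(zip(levels, prop_test_greater))
    for (x0, p0), (x1, p1) in zip(points, points[1:]):
        if p0 <= 0.5 <= p1:
            return x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
    raise ValueError("50% point not bracketed by the data")

# invented psychometric data: proportion of 'test brighter' responses per level
levels = [1, 2, 3, 4, 5]
props = [0.05, 0.20, 0.45, 0.80, 0.95]
pse = estimate_pse(levels, props)
```

In practice a cumulative Gaussian or logistic function is usually fitted rather than a straight line, but the midpoint logic is the same.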

Describe the Attenuation model and the evidence supporting it

Treisman (1969) described the attenuation model, which reconsiders the early selection model: instead of an all-or-nothing filter, unattended information passes through the 'filter' in an attenuated form. The model arose from a study by Treisman and Geffen (1967) that assessed unattended perception online: subjects shadowed the attended channel while responding to a target word in the unattended channel. They found that 87% of attended targets were detected but only 8% of unattended targets (contradicting the late selection claim that attended = unattended). The attenuation model also explains previous findings, e.g. the cocktail party effect (Moray, 1959): salient unattended information, such as one's own name, has a lower threshold for identification and so is still noticed.

Explain what is the dual-stream hypothesis in vision and provide some evidence

Ungerleider and Mishkin (1982) proposed separate parallel pathways in the brain that process different information: the dorsal stream ('where') carries information about location and movement, while the ventral stream ('what') carries information about object recognition and colour perception. Zeki (1976) demonstrated specialised populations of neurons corresponding to different features of the visual world, e.g. V2 and V3 for complex features, V4 for colour constancy, V5 for motion. Electrical stimulation of MT biases motion perception towards the preferred direction of the stimulated neurons, and evidence shows these neurons care about coherence rather than just direction. There are also regions that respond selectively to complex visual stimuli (Grill-Spector, 2003), e.g. the fusiform face area (FFA).

What viewpoint dependent models are and what their pros and cons are

Viewpoint dependent models suggest an egocentric frame of reference when recognising objects. Tarr & Pinker (1989) studied this proposal by teaching observers to name several novel shapes appearing at select orientations. They found that observers exhibited a significant cost, in both response times and error rates, when recognizing trained shapes in new orientations, and that these costs were systematically related to the distance from a trained view, demonstrating viewpoint dependence. Initially some thought this reflected mental rotation; however, Gauthier et al. (2002) used fMRI to show that localized regions of the dorsal pathway responded in a viewpoint-dependent manner during mental rotation tasks, while localized regions of the ventral pathway responded in a viewpoint-dependent manner during object recognition tasks. Bulthoff and Edelman (1992) found the best recognition performance for unfamiliar viewpoints between trained views; poorer performance for viewpoints outside the trained views but along the same axis; and the poorest performance for viewpoints along the orthogonal axis. Riesenhuber and Poggio (1999) proposed a viewpoint-dependent model in which an image is associated with an object by first deriving viewpoint-dependent shape representations, then matching the image to stored standard views learnt from a few orientations. This explains canonical views, which are the quickest to recognise; greater deviation from learnt views also leads to more errors. Problems: such models are more memory-intensive than geon models, and there is evidence that IT might store simplified object parts rather than whole complex views.

Why is vision for action challenging and what is the evidence

Vision for action is challenging due to its numerous degrees of freedom: many different movements can achieve the same action goal. This kinematic redundancy, whereby many combinations of movements reach one goal, brings flexibility and resilience to perturbation but requires a computationally complex selection process. In addition, action requires visuomotor transformation, in which one must reference one's own body parts to the object. Evidence for the difficulty of vision for action comes from investigating end-state comfort. Jovanovic and Schwarzer (2011) tested this by asking participants to place a bar in a hole to turn on a light. The standard way was to do it thumbs-up, while the manipulation required a thumbs-down grip to achieve end-state comfort; this involves prior visual analysis and planning in which an uncomfortable initial action is chosen to minimize the total cost of the action. They found that only children from the age of 3 demonstrate efficient planning for end-state comfort, and that this ability is not very amenable to training.

Describe visuomotor transformation in infants

Visuomotor transformation is thought to be difficult for infants because, in utero, they only feel touch and hear sounds and cannot correlate these with visual input. Thus, to reach for an object they feel on their arm, they need to learn to link what they see with where their body parts are in space. Bremner et al. (2008) found that when a 6-month-old infant's arm is stimulated, the infant tends to look at the stimulated hand only when the hands are uncrossed; when the hands are crossed, there is no significant preference. Only 10-month-old infants look at the correct hand in the crossed posture. This demonstrates that infants link body position to visual space, and that by 10 months they can recode touch into a different frame of reference.
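The recoding Bremner et al. describe can be caricatured in a few lines. This is a hypothetical illustration, not their analysis: mapping a touched hand into external visual space gives a different answer depending on whether the arms are crossed, which is exactly the remapping the 6-month-olds have not yet mastered.

```python
# Toy sketch (my own illustration): recoding touch from a body-based frame
# into an external visual frame depends on posture.

def external_side(stimulated_hand, arms_crossed):
    """Side of external visual space occupied by the touched hand."""
    flipped = {"left": "right", "right": "left"}
    # Uncrossed: body frame and external frame agree.
    # Crossed: the left hand sits in right visual space, and vice versa.
    return flipped[stimulated_hand] if arms_crossed else stimulated_hand

print(external_side("left", arms_crossed=False))  # left
print(external_side("left", arms_crossed=True))   # right
```

A 6-month-old behaves as if only the body-based code (the uncrossed case) were available; by 10 months the posture-dependent remapping succeeds.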

Do blind individuals show improved or impaired perception?

While there are numerous demonstrations of more sensitive auditory and tactile senses amongst blind individuals (Theoret et al., 2004), studies of cross-sensory calibration suggest that impaired vision may also impair other senses. Kolarik et al. (2016) found that auditory spatial representations of the world are compressed in blind humans. They measured auditory space and time perception in blind and sighted subjects and found that blind adults are less sensitive in auditory space perception but not in time perception. This could be attributed to the lack of vision to calibrate the auditory sense in spatial judgments.

What do inversion effects tell us about face processing?

Yin (1969) found that faces were more difficult to recognise and remember when upside down than when upright, and that this deficit was disproportionate for faces: no comparable inversion effect was found for recognition of pictures of houses or airplanes. However, proponents of the expertise hypothesis claim that face inversion effects occur because of disproportionate experience with facial stimuli compared with other objects. On this view, with extensive practice at making subtle within-category discriminations for other object types, orientation should become critical for those objects too. Diamond and Carey (1986) tested dog breeders/judges on dog recognition and found inversion effects. However, McKone et al. (2007) replicated the experiment using dog and handwriting experts and failed to reproduce the effect. They suggested that familiarity with the pictured dogs may have artificially boosted the original experts' memory for upright pictures.

Describe the evidence for the existence of voice cells and its properties

fMRI studies have demonstrated that humans show cortical preferences for voices over other natural sounds (Belin et al., 2000). Extracellular electrophysiology targeting the anterior fMRI-defined voice region and recording from individual brain cells (Perrodin et al., 2011) showed that 'voice' cells with a functional specialisation for conspecific vocalisations exist, as they display a categorical preference for voices. Voice cells were also found to be highly selective to individual voice stimuli, responding strongly to only 21% of the presented voices. Furthermore, Perrodin et al. (2014) examined neuronal coding strategies and demonstrated that voice-area neurons are sensitive to different vocal features produced by the same individual (e.g. a monkey's coo vs. grunt).

