psych.


dual code theory

According to dual code theory, proposed by Allan Paivio in 1971, people perceive verbal and nonverbal stimuli through their sensory systems (eyes and ears). From there the information is passed into the verbal and nonverbal associative systems, which work independently or together to produce verbal and nonverbal responses.

PQ4R

PQ4R encourages deeper and more elaborate processing of textual material: Preview, Question, Read, Reflect, Recite, Review.

Schema

Schema: an organized cluster of information about a particular topic; useful in making inferences.

fovea

Area of the retina specialized for high acuity in the center of the macula; contains a high density of cones and few rods.

cones

Color receptors

Concept formation

Concept formation is a process where the individual learns to form concepts by sorting and classifying entities experienced in the world, based on their defining characteristics.

sensory memory

During every moment of an organism's life, sensory information is being taken in by sensory receptors and processed by the nervous system. Sensory information is stored in sensory memory just long enough to be transferred to short-term memory.[1] Humans have five traditional senses: sight, hearing, taste, smell, and touch. Sensory memory (SM) allows individuals to retain impressions of sensory information after the original stimulus has ceased.[2] A common demonstration of SM is a child's ability to write letters and make circles by twirling a sparkler at night. When the sparkler is spun fast enough, it appears to leave a trail which forms a continuous image. This "light trail" is the image that is represented in the visual sensory store known as iconic memory. The other two types of SM that have been most extensively studied are echoic memory and haptic memory; however, it is reasonable to assume that each physiological sense has a corresponding memory store. Children, for example, have been shown to remember specific "sweet" tastes during incidental learning trials, but the nature of this gustatory store is still unclear.[3] SM is considered to be outside of cognitive control and is instead an automatic response. The information represented in SM is the "raw data" which provides a snapshot of a person's overall sensory experience. Common features between each sensory modality have been identified; however, as experimental techniques advance, exceptions and additions to these general characteristics will surely evolve. The auditory store, echoic memory, for example, has been shown to have a temporal characteristic in which the timing and tempo of a presented stimulus affects transfer into more stable forms of memory.[4] Four common features have been identified for all forms of SM:[4]
1. The formation of a SM trace is only weakly dependent on attention to the stimulus.[5]
2. The information stored in SM is modality specific: echoic memory, for example, is for the exclusive storage of auditory information, and haptic memory is for the exclusive storage of tactile information.
3. Each SM store represents an immense amount of detail, resulting in very high resolution of information.
4. Each SM store is very brief and lasts a very short period of time; once the SM trace has decayed or is replaced by a new memory, the information stored is no longer accessible and is ultimately lost.
All SM stores have slightly different durations. It is widely accepted that all forms of SM are very brief in duration; however, the approximated duration of each memory store is not static. Iconic memory, for example, holds visual information for approximately 250 milliseconds.[6] The SM is made up of spatial or categorical stores of different kinds of information, each subject to different rates of information processing and decay. The visual sensory store has a relatively high capacity, with the ability to hold up to 12 items.[7] Genetics also play a role in SM capacity; mutations to brain-derived neurotrophic factor (BDNF), a nerve growth factor, and to N-methyl-D-aspartate (NMDA) receptors, responsible for synaptic plasticity, decrease iconic and echoic memory capacities respectively.[8][9] Echoic memory represents SM for the auditory sense of hearing. Auditory information travels as sound waves, which are sensed by hair cells in the ears. Information is sent to and processed in the temporal lobe.
The echoic sensory store holds information for 2-3 seconds to allow for proper processing. The first studies of echoic memory came shortly after Sperling investigated iconic memory using an adapted partial report paradigm.[13] Today, characteristics of echoic memory have been found mainly using a mismatch negativity (MMN) paradigm, which utilizes EEG and MEG recordings.[14] MMN has been used to identify some of the key roles of echoic memory, such as change detection and language acquisition. Change detection, or the ability to detect an unusual or possibly dangerous change in the environment independent of attention, is key to the survival of an organism.[14] One study focusing on echoic sensory changes suggested that when a sound is presented to a subject, it is enough to shape an echoic memory trace that can be compared to a physically different sound. Change-related cortical responses were detected in the superior temporal gyrus using EEG.[15] With regard to language, a characteristic of children who begin speaking late in development is reduced duration of echoic memory.[16] In short, "Echoic memory is a fast-decaying store of auditory information."[17] In the case of damage to or lesions developing on the frontal lobe, parietal lobe, or hippocampus, echoic memory will likely be shortened and/or have a slower reaction time.[18]
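The four common features listed above (pre-attentive trace formation, modality-specific stores, high detail, brief duration) can be compressed into a toy model. A minimal Python sketch, not from the source; the iconic and echoic durations follow the approximate figures quoted above, while the haptic figure and everything else are invented for illustration:

    import time

    # Toy sensory-memory store: modality-specific traces that decay on their own.
    DURATIONS_S = {"iconic": 0.25, "echoic": 3.0, "haptic": 1.0}  # haptic value invented
    traces = []  # (modality, raw_data, timestamp)

    def sense(modality, raw_data):
        # Trace formation needs little or no attention: every stimulus is recorded.
        traces.append((modality, raw_data, time.monotonic()))

    def available(modality):
        # A decayed trace is no longer accessible and is ultimately lost.
        now = time.monotonic()
        return [data for m, data, t in traces
                if m == modality and now - t < DURATIONS_S[m]]

    sense("iconic", "sparkler light trail")
    print(available("iconic"))   # trace still readable within ~250 ms
    time.sleep(0.3)
    print(available("iconic"))   # [] -- the iconic trace has decayed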

unilateral visual neglect

Hemispatial neglect, also called hemiagnosia, hemineglect, unilateral neglect, spatial neglect, contralateral neglect, unilateral visual inattention,[1] hemi-inattention,[1] neglect syndrome or contralateral hemispatial agnosia, is a neuropsychological condition in which, after damage to one hemisphere of the brain is sustained, a deficit in attention to and awareness of one side of the field of vision is observed. It is defined by the inability of a person to process and perceive stimuli on one side of the body or environment, where that inability is not due to a lack of sensation.[1] Hemispatial neglect is very commonly contralateral to the damaged hemisphere, but instances of ipsilesional neglect (on the same side as the lesion) have been reported. Hemispatial neglect results most commonly from strokes and unilateral brain injury to the right cerebral hemisphere, with rates in the critical stage of up to 80% causing visual neglect of the left-hand side of space. Neglect is often produced by massive strokes in the middle cerebral artery region and is variegated, so that most sufferers do not exhibit all of the syndrome's traits.[3] Right-sided spatial neglect is rare because there is redundant processing of the right space by both the left and right cerebral hemispheres, whereas in most left-dominant brains the left space is only processed by the right cerebral hemisphere. Although it most strikingly affects visual perception ('visual neglect'), neglect in other forms of perception can also be found, either alone or in combination with visual neglect.[4] For example, a stroke affecting the right parietal lobe of the brain can lead to neglect for the left side of the visual field, causing a patient with neglect to behave as if the left side of sensory space is nonexistent (although they can still turn left). In an extreme case, a patient with neglect might fail to eat the food on the left half of their plate, even though they complain of being hungry. If someone with neglect is asked to draw a clock, their drawing might show only the numbers 12 to 6, or all 12 numbers might be on one half of the clock face with the other half distorted or blank. Neglect patients may also ignore the contralesional side of their body; for instance, they might only shave, or apply make-up to, the non-neglected side. These patients may frequently collide with objects or structures such as door frames on the side being neglected.[1] Neglect may also present as a delusional form, where the patient denies ownership of a limb or an entire side of the body. Since this delusion often occurs alone, without the accompaniment of other delusions, it is often labeled as a monothematic delusion. Neglect not only affects present sensation but memory and recall as well. A patient suffering from neglect may also, when asked to recall a memory of a certain object and then draw said object, draw only half of the object. It is unclear, however, whether this is due to a perceptive deficit of the memory (to the patient having lost pieces of spatial information of the memory) or whether the information within the memory is whole and intact but simply being ignored, the same way portions of a physical object in the patient's presence would be ignored. Some forms of neglect may also be very mild, for example in a condition called extinction, where competition from the ipsilesional stimulus impedes perception of the contralesional stimulus.
These patients, when asked to fixate on the examiner's nose, can detect fingers being wiggled on the affected side. If the examiner were to wiggle his or her fingers on both the affected and unaffected sides of the patient, the patient will report seeing movement only on the ipsilesional side.[5] Brain areas in the parietal and frontal lobes are associated with the deployment of attention (internally, or through eye movements, head turns or limb reaches) into contralateral space. Neglect is most closely related to damage to the temporo-parietal junction and posterior parietal cortex.[7] The lack of attention to the left side of space can manifest in the visual, auditory, proprioceptive, and olfactory domains. Although hemispatial neglect often manifests as a sensory deficit (and is frequently co-morbid with sensory deficit), it is essentially a failure to pay sufficient attention to sensory input. Although hemispatial neglect has been identified following left hemisphere damage (resulting in the neglect of the right side of space), it is most common after damage to the right hemisphere. This disparity is thought to reflect the fact that the right hemisphere of the brain is specialized for spatial perception and memory, whereas the left hemisphere is specialized for language; there is redundant processing of the right visual fields by both hemispheres. Hence the right hemisphere is able to compensate for the loss of left hemisphere function, but not vice versa.[8] Neglect is not to be confused with hemianopsia. Hemianopsia arises from damage to the primary visual pathways, cutting off the input to the cerebral hemispheres from the retinas. In neglect, by contrast, the damage is to the processing areas: the cerebral hemispheres receive the input, but there is an error in the processing that is not well understood.
Varieties: Neglect is a heterogeneous disorder that manifests itself radically differently in different patients. No single mechanism can account for these different manifestations.[9] A vast array of impaired mechanisms are found in neglect; these mechanisms alone would not cause neglect.[6] The complexity of attention alone, just one of several mechanisms that may interact, has generated multiple competing hypothetical explanations of neglect. So it is not surprising that it has proven difficult to assign particular presentations of neglect to specific neuroanatomical loci. Despite such limitations, we may loosely describe unilateral neglect with four overlapping variables: type, range, axis, and orientation.
Type: Types of hemispatial neglect are broadly divided into disorders of input and disorders of output. The neglect of input, or "inattention", includes ignoring contralesional sights, sounds, smells, or tactile stimuli. Surprisingly, this inattention can even apply to imagined stimuli: in what is termed "representational neglect", patients may ignore the left side of memories, dreams, and hallucinations. Output neglect includes motor and pre-motor deficits. A patient with motor neglect does not use a contralesional limb despite the neuromuscular ability to do so. One with pre-motor neglect, or directional hypokinesia, can move unaffected limbs ably in ipsilateral space but has difficulty directing them into contralesional space. Thus a patient with pre-motor neglect may struggle to grasp an object on the left side even when using the unaffected right arm.
Range: Hemispatial neglect can have a wide range in terms of what the patient neglects.
The first range of neglect, commonly referred to as "egocentric" neglect, is found in patients who neglect their own body or personal space.[10] These patients tend to neglect the opposite side of their lesion, based on the midline of the body, head, or retina.[11] For example, in a gap detection test, subjects with egocentric hemispatial neglect on the right side often make errors on the far right side of the page, as they are neglecting the space in their right visual field.[12] The next range of neglect is "allocentric" neglect, where individuals neglect either their peri-personal or extrapersonal space. Peri-personal space refers to the space within the patient's normal reach, whereas extrapersonal space refers to the objects/environment beyond the body's current contact or reaching ability.[10] Patients with allocentric neglect tend to neglect the contralesional side of individual items, regardless of where they appear with respect to the viewer.[11] For example, in the same gap detection test mentioned above, subjects with allocentric hemispatial neglect on the right side will make errors on all areas of the page, specifically neglecting the right side of each individual item.[12] This differentiation is significant because the majority of assessment measures test only for neglect within the reaching, or peri-personal, range. But a patient who passes a standard paper-and-pencil test of neglect may nonetheless ignore a left arm or not notice distant objects on the left side of the room. In cases of somatoparaphrenia, which may be caused by personal neglect, patients deny ownership of contralesional limbs. Sacks (1985) described a patient who fell out of bed after pushing out what he perceived to be the severed leg of a cadaver that the staff had hidden under his blanket. Patients may say things like, "I don't know whose hand that is, but they'd better get my ring off!" or, "This is a fake arm someone put on me. I sent my daughter to find my real one."
Axis: Most tests for neglect look for rightward or leftward errors. But patients may also neglect stimuli on one side of a horizontal or radial axis. For example, when asked to circle all the stars on a printed page, they may locate targets on both the left and right sides of the page while ignoring those across the top or bottom. In a recent study, researchers asked patients with left neglect to project their midline with a neon bulb and found that they tended to point it straight ahead but position it rightward of their true midline. This shift may account for the success of therapeutic prism glasses, which shift left visual space toward the right. By shifting visual input, they seem to correct the mind's sense of midline. The result is not only the amelioration of visual neglect, but also of tactile, motor, and even representational neglect.
Orientation: An important question in studies of neglect has been: "left of what?" That is to say, what frame of reference does a subject adopt when neglecting the left half of his or her visual, auditory, or tactile field? The answer has proven complex. It turns out that subjects may neglect objects to the left of their own midline (egocentric neglect) or may instead see all the objects in a room but neglect the left half of each individual object (allocentric neglect).[13] These two broad categories may be further subdivided.
Patients with egocentric neglect may ignore the stimuli leftward of their trunks, their heads, or their retinae.[13] Those with allocentric neglect may neglect the true left of a presented object, or may first correct in their mind's eye a slanted or inverted object and then neglect the side then interpreted as being on the left.[14] So, for example, if patients are presented with an upside-down photograph of a face, they may mentally flip the object right side up and then neglect the left side of the adjusted image. In another example, if patients are presented with a barbell, patients will more significantly neglect the left side of the barbell, as expected with a right temporal lobe lesion. If the barbell is rotated such that the left side is now on the right side, patients will more significantly neglect the left side of the object, even though it is now on the right side of space.[14] This also occurs with slanted or mirror-image presentations. A patient looking at a mirror image of a map of the world may neglect to see the Western Hemisphere despite its mirror-reversed placement on the right side of the map. Various neuropsychological research studies have considered the role of frame of reference in hemispatial neglect, offering new evidence to support both allocentric and egocentric neglect. To begin, one study conducted by Dongyun Li, Hans-Otto Karnath, and Christopher Rorden examined whether allocentric neglect varies with egocentric position. This experimental design consisted of testing eleven right-hemisphere stroke patients. Five of these patients showed spatial neglect on their contralesional side, while the remaining six patients showed no spatial neglect.[15] During the study, the patients were presented with two arrays of seven triangles. The first array ran from southwest to northeast (SW-NE) and the second array ran from southeast to northwest (SE-NW). In a portion of the experimental trials, the middle triangle in the array contained a gap along one side. Participants were tested on their ability to perceive the presence of this gap, and were instructed to press one response button if the gap was present and a second response button if the gap was absent.[15] To test the neglect frame of reference, the two different arrays were carefully situated so that the gap in the triangle fell on opposite sides of the allocentric field. In the SW-NE array, the gap in the triangle fell on the allocentric right of the object-centered axis along which the triangle pointed. In the SE-NW configuration, the gap in the triangle fell on the allocentric left of the object-centered axis.[15] Furthermore, varying the position of the arrays with respect to the participant's trunk midline was used to test egocentric neglect. The arrays were therefore presented at 0° (i.e. in line with the participant's trunk midline), at −40° left, and at +40° right.[15] Ultimately, varying the position of the array within the testing visual field allowed for the simultaneous measurement of egocentric neglect and allocentric neglect. The results of this experimental design showed that the spatial neglect patients performed more poorly for the allocentric left side of the triangle, as well as for objects presented on the egocentric left side of the body.[15] Furthermore, the poor accuracy for detecting features of the object on the left side of the object's axis was more severe when the objects were presented on the contralesional side of the body.
Thus, these findings illustrate that both allocentric and egocentric biases are present simultaneously, and that egocentric information can influence the severity of allocentric neglect.[15] A second study, conducted by Moscovitch and Behrmann, investigated the reference frame of neglect with respect to the somatosensory system. Eleven patients with parietal lobe lesions and subsequent hemispatial neglect were analyzed during this experiment.[16] A double simultaneous stimulation procedure was utilized, during which the patients were touched lightly and simultaneously on the left and right side of the wrist of one hand. The patients were tested both with their palms facing down and with their palms facing up.[16] This experimental condition allowed the scientists to determine whether neglect in the somatosensory system occurs with respect to the sensory receptor surface (egocentric) or with respect to a higher-order spatial frame of reference (allocentric). The results of this experiment showed that the hemispatial neglect patients neglected somatosensory stimuli on the contralesional side of space, regardless of hand orientation.[16] These findings suggest that, within the somatosensory system, stimuli are neglected with respect to the allocentric, spatial frame of reference, in addition to an egocentric, sensory frame of reference.[16] Ultimately, the discoveries made by these experiments indicate that hemispatial neglect occurs with respect to multiple, simultaneously derived frames of reference, which dictate the nature and extent of neglect within the visual, auditory, and tactile fields.
Theories of neglect: Researchers have debated whether neglect is a disorder of spatial attention or of spatial representation.[17]
Spatial attention: Spatial attention is the process whereby objects in one location are chosen for processing over objects in another location.[9] On this view, neglect is a disorder of attention: the patient tends to direct attention to the unaffected side.[6] Neglect is caused by a decrease in stimuli on the contralesional side because of a lack of stimulation of the ipsilesional visual cortex and an increased inhibition of the contralesional side.[18] In this theory neglect is seen as a disorder of attention and orientation caused by disruption of the visual cortex. Patients with this disorder will direct attention and movements to the ipsilesional side and neglect stimuli on the contralesional side despite having preserved visual fields. The result of all of this is an increased sensitivity of visual performance on the unaffected side.[18] The patient shows an affinity for the ipsilesional side, being unable to disengage attention from that side.[19]
Spatial representation: Spatial representation is the way space is represented in the brain.[5] In this theory it is believed that the underlying cause of neglect is the inability to form contralateral representations of space.[9] On this account, neglect patients demonstrate a failure to describe, from memory, the contralesional side of a familiar scene viewed from a given point. To support this theory, evidence from Bisiach and Luzzatti's study of the Piazza del Duomo can be considered. For the study, patients with hemispatial neglect who were also familiar with the layout of the Piazza del Duomo square were observed. The patients were asked to imagine themselves at various vantage points in the square, without physically being in the square. They were then asked to describe different landmarks around the square, such as stores.
At each separate vantage point, patients consistently only described landmarks on the right side, ignoring the left side of the representation. However, the results of their multiple descriptions at the different vantage points showed that they knew information around the entire square, but could only identify the right side of the represented field at any given vantage point. When asked to switch vantage points, so that the scene that had been on the contralesional side was now on the ipsilesional side, the patients were able to describe in detail the scene they had earlier neglected.[19] The same patterns can be found when comparing actual visual stimuli to imagery in the brain (Rossetti et al., 2010).[20] A neglect patient who was very familiar with the map of France was asked to name French towns on a map of the country, both from a mental image of the map and from a physical image of the map. The image was then rotated 180 degrees, both mentally and physically. With the mental image, the neglect stayed consistent with the image; that is, when the map was in its original orientation, the patient named towns mostly on the East side of France, and when they mentally rotated the map they named towns mostly on the West side of France, because the West coast was now on the right side of the represented field. However, with the physical copy of the map, the patient's focus was on the East side of France in either orientation. This leads researchers to believe that neglect for images in memory may be dissociated from the neglect of stimuli in extrapersonal space.[9] In this case patients have no loss of memory, making their neglect a disorder of spatial representation, i.e. of the ability to reconstruct spatial frames in which the spatial relationships of objects (whether perceived, imagined or remembered) to the subject and to each other are organized so that they can be correctly acted on.[19] This theory can also be supported by neglect in dreams (Figliozzi et al., 2007).[21] The study was run on a neglect patient by tracking his eye movements while he slept, during the REM cycle. Results showed that the majority of the eye movements were aimed to his right side, indicating that the images represented in his dreams were also affected by hemispatial neglect. Another example would be a left neglect patient failing to describe left turns while describing a familiar route. This shows that the failure to describe things on the contralesional side can also affect verbal items. These findings show that space representation is more topological than symbolic.[19] Patients show a contralesional loss of space representation with a deviation of spatial reference to the ipsilesional side.[5] In these cases we see a left-right dissimilarity of representation rather than a decline of representational competence.[6]
Treatment consists of finding ways to bring the patient's attention toward the left, usually done incrementally, by going just a few degrees past midline and progressing from there. Rehabilitation of neglect is often carried out by neuropsychologists, occupational therapists, speech-language pathologists, neurologic music therapists, physical therapists, optometrists and orthoptists. Forms of treatment that have been tested with variable reports of success include prismatic adaptation, where a prism lens is worn to pull the vision of the patient towards the left, and constrained movement therapy, where the "good" limb is constrained in a sling to encourage use of the contralesional limb.
Eye-patching has similarly been used, placing a patch over the "good" eye. Pharmaceutical treatments have mostly focused on dopaminergic therapies such as bromocriptine, levodopa, and amphetamines, though these tests have had mixed results, helping in some cases and accentuating hemispatial neglect in others. Caloric vestibular stimulation (CVS) has been shown to bring about a brief remission in some cases;[22] however, this technique has been known to elicit unpleasant side effects such as nystagmus, vertigo and vomiting.[23] A study done by Schindler and colleagues examined the use of neck muscle vibration on the contralesional posterior neck muscles to induce diversion of gaze from the subjective straight-ahead. Subjects received 15 consecutive treatment sessions and were evaluated on different aspects of the neglect disorder, including perception of midline and scanning deficits. The study found evidence that neck muscle stimulation may work, especially if combined with visual scanning techniques; the improvement was still evident 2 months after the completion of treatment.[24] Other areas of emerging treatment options include the use of prisms, visual scanning training, mental imagery training, video feedback training, trunk rotation, galvanic vestibular stimulation (GVS), transcranial magnetic stimulation (TMS) and transcranial direct-current stimulation (tDCS). Of these emerging treatment options, the most studied intervention is prism adaptation, and there is evidence of relatively long-term functional gains from comparatively short-term usage. However, all of these treatment interventions (particularly the stimulation techniques) are relatively new, and randomised controlled trial evidence is still limited. Further research is needed in this field to provide more support for evidence-based practice.[25] In a review article, Pierce and Buxbaum (2002) concluded that the evidence for hemispheric activation approaches, which focus on moving the limb on the side of the neglect, is conflicting in the literature.[26] The authors note that a possible limitation of this approach is the requirement for patients to actively move the neglected limb, which may not be possible for many patients. Constraint-induced therapy (CIT) appears to be an effective, long-term treatment for improving neglect in various studies. However, the use of CIT is limited to patients who have active control of wrist and hand extension. Prism glasses, hemispatial glasses, and eye-patching all appear to be effective in improving performance on neglect tests. Caloric stimulation treatment appears to be effective in improving neglect; however, the effects are generally short-term. The review also suggests that optokinetic stimulation is effective in improving position sense, motor skills, body orientation, and perceptual neglect on a short-term basis. As with caloric stimulation treatment, long-term studies will be necessary to show its effectiveness. A few trunk rotation therapy studies suggest its effectiveness in improving performance on neglect tests as well as on the Functional Independence Measure (FIM).
Some less studied treatment possibilities include treatments that target the dorsal stream of visual processing, mental imagery training, and neck vibration therapy.[26] Trunk rotation therapies aimed at improving postural disorders and balance deficits in patients with unilateral neglect have demonstrated promising results in regaining voluntary trunk control when using specific postural rehabilitative devices. One such device is the Bon Saint Côme apparatus, which uses spatial exploratory tasks in combination with auditory and visual feedback mechanisms to develop trunk control. The Bon Saint Côme device has been shown to be effective with hemiplegic subjects due to the combination of trunk stability exercises with the cognitive requirements needed to perform the postural tasks.[27]

Loftus (1973)

Loftus (1973): importance of noun order. (A) Asked subjects to list categories that instances belonged to, and presented categories and asked for instances. (B) Measured reaction time in a verification task ("A wren is a bird" vs. "A bird is a wren"). According to feature theory, noun order should have no effect; in fact, it did.

morphemes

Morphemes are the smallest meaningful units words can be broken into.

cell body (soma)

Neuron part that contains most of the cytoplasm and the nucleus; the center of metabolic activity in the neuron.

dendrites

Neuron part that detects stimuli: short branches of a neuron that receive stimuli and conduct impulses to the cell body.

rods

Night-vision receptors.

O'Craven and Kanwisher (2000)

O'Craven and Kanwisher (2000): brain areas activated by imagery correspond to brain areas activated by perception; parahippocampal place area, fusiform face area.

Propositional representations

Propositional representations are also:
1. Language-like only in the sense that they manipulate symbols as a language does. The language of thought cannot be thought of as a natural language; it can only be a formal language that applies across different linguistic subjects. It therefore must be a language common to mind rather than culture, and must be organizational rather than communicative. Thus Mentalese is best expressed through predicate and propositional calculus.
2. Made up of discrete symbols; each symbol has a smallest constituent part, i.e. there is a limit to how far units of representation can be broken down.
3. Explicit: each symbol represents something (object, action, relation) specifically and thus explicitly.
4. Grammatical: symbolic manipulation follows syntactic and semantic rules.
5. Abstract and amodal: symbols may represent any ideational content irrespective of which sensory modality was involved in its perception (unlike a pictorial representation, which must be specific to the visual sensory mode).
Each proposition consists of a set of predicates and arguments, which are represented in the form of predicate calculus. For instance, an event (X): John hit Chris with a unicycle, the unicycle broke, because of this John started to cry, which caused Chris to be happy. A propositional representation: P [hit (John, Chris, unicycle)]; Q [broke (unicycle)]; R [cry (John)]; S [happy (Chris)]; Cause (Q, R); Cause (R, S). Each set of predicates (words like hit, broke, cry, happy are first-order predicates; Cause is a second-order predicate) and arguments (often consisting of an agent/subject (e.g. John in 'P'), a recipient/object (e.g. Chris in 'P') and an instrument (e.g. the unicycle in 'P')) are in turn manipulated as propositions: the event/statement "John hit Chris with the unicycle" is represented as proposition 'P'. Also, features of particular objects may be characterized through attribute lists. 'John' as a singular object may have the attributes 'plays guitar', 'juggles', 'eats a lot', 'rides a unicycle', etc. Thus reference to 'John' identifies him as the object of thought in virtue of his having certain of these attributes. So in predicate calculus, if John (F) has the property of 'rides a unicycle' (x), we may say salva veritate: (x)(Fx). These elements have been called semantic primitives or semantic markers/features. Each primitive may in turn form part of a propositional statement, which in turn could be represented by an abstract figure, e.g. 'P'. The primitives themselves play a crucial role in categorizing and classifying objects and concepts. The meaningful relations between ideas and concepts expressed between and within the propositions are in part dealt with through the general laws of inference. One of the most common of these is modus ponendo ponens (MPP), which is a simple inference of relation between two objects, the latter supervening on the former (P → Q). Thus if we have two propositions (P, Q) and we assume a law of inference that relates them both (P → Q), then if we have P we must necessarily have Q. Relations of causation may be expressed in this fashion, i.e. one state (P) causing (→) another (Q). So a purely formal characterization of the event (X) written above in natural language would be something like:
1. P, Q (A)
2. Q → R (A)
3. Q (A1)
4. R (2,3 MPP)
5. R → S (A)
6. S (4,5 MPP)
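The event (X) and its derivation can be mirrored in a short sketch. A minimal Python illustration, not from the source: propositions are encoded as predicate/argument tuples and the inference loop is a toy version of MPP; the encoding itself is invented for the example.

    # Event (X) as predicate/argument tuples; Cause acts as a second-order rule.
    P = ("hit", ("John", "Chris", "unicycle"))   # agent, recipient, instrument
    Q = ("broke", ("unicycle",))
    R = ("cry", ("John",))
    S = ("happy", ("Chris",))
    rules = [(Q, R), (R, S)]                     # Cause(Q, R), Cause(R, S)

    def derive(known, rules):
        # Toy modus ponendo ponens: from X and a rule X -> Y, infer Y;
        # repeat until nothing new can be derived.
        derived = set(known)
        changed = True
        while changed:
            changed = False
            for antecedent, consequent in rules:
                if antecedent in derived and consequent not in derived:
                    derived.add(consequent)
                    changed = True
        return derived

    # Given P and Q, the loop yields R (cry(John)) and then S (happy(Chris)),
    # matching steps 4 and 6 of the formal derivation above.
    print(derive({P, Q}, rules))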

rote rehearsal

Retaining information in memory simply by repeating it over and over again.

Route map

Route map: a path that indicates specific places but contains no 2-D information; a verbal description of a path, like an action plan. Brain regions activated in a route-following task: anterior regions, motor regions.

shift from behaviorism to cognitivism

Response to behaviorism: The cognitive revolution in psychology took form as cognitive psychology, an approach in large part a response to behaviorism, the predominant school in scientific psychology at the time. Behaviorism was heavily influenced by Ivan Pavlov and E. L. Thorndike, and its most notable early practitioner was John B. Watson, who proposed that psychology could only become an objective science were it based on observable behavior in test subjects. Methodological behaviorists argued that because mental events are not publicly observable, psychologists should avoid description of mental processes or the mind in their theories. However, B. F. Skinner and other radical behaviorists objected to this approach, arguing that a science of psychology must include the study of internal events.[16] As such, behaviorists at this time did not reject cognition (private behaviors), but simply argued against the concept of the mind being used as an explanatory fiction (rather than rejecting the concept of mind itself).[17] Cognitive psychologists extended this philosophy through the experimental investigation of mental states, allowing scientists to produce theories that more reliably predict outcomes. The traditional account of the "cognitive revolution", which posits a conflict between behaviorism and the study of mental events, was challenged by Jerome Bruner, who characterized it as "...an all-out effort to establish meaning as the central concept of psychology [...]. It was not a revolution against behaviorism with the aim of transforming behaviorism into a better way of pursuing psychology by adding a little mentalism to it. [...] Its aim was to discover and to describe formally the meanings that human beings created out of their encounters with the world, and then to propose hypotheses about what meaning-making processes were implicated." (Bruner, 1990, Acts of Meaning, p. 2) It should be noted, however, that behaviorism was to a large extent restricted to North America, and the cognitive reactions were in large part a reimportation of European psychologies. George Mandler has described that evolutionary history.[18] Criticisms: Lachman, Lachman and Butterfield were among the first to imply that cognitive psychology has a revolutionary origin.[19] After this, proponents of information processing theory and later cognitivists believed that the rise of cognitivism constituted a paradigm shift. Despite this belief, many have stated, both unwittingly and wittingly, that cognitive psychology is linked to behaviorism. Leahey said that cognitive scientists believe in a revolution because it provides them with an origin myth, a beginning that helps legitimize their science.[20] Others have said that cognitivism is behaviorism with a new language, a slightly bent model, and new concerns aimed at the description, prediction and control of behavior. On this view, the change from behaviorism to cognitivism was gradual: not a revolution, but a slowly evolving science that took the origins of behaviorism and built on them.[21] This evolution and building has not stopped; see postcognitivism. The cognitive revolution was an intellectual movement that began in the 1950s as an interdisciplinary study of the mind and its processes, which became known collectively as cognitive science. The relevant areas of interchange were between the fields of psychology, anthropology, and linguistics, using approaches developed within the then-nascent fields of artificial intelligence, computer science, and neuroscience.
A key goal of early cognitive psychology was to apply the scientific method to the study of human cognition by designing experiments that used computational models of artificial intelligence to systematically test theories about human mental processes in a controlled laboratory setting.[1] Important publications that triggered the cognitive revolution include psychologist George Miller's 1956 article "The Magical Number Seven, Plus or Minus Two"[1] (one of the most frequently cited papers in psychology),[2] linguist Noam Chomsky's rejection of the behaviorist approach in his 1959 review of B.F. Skinner's Verbal Behavior (1957),[3][4] and foundational works in the field of artificial intelligence by John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon, such as the 1958 article "Elements of a Theory of Human Problem Solving".[1] Ulric Neisser's 1967 book Cognitive Psychology was also a landmark contribution.[5] In the 1960s, the Harvard Center for Cognitive Studies[6] and the Center for Human Information Processing at the University of California San Diego were influential in developing the academic study of cognitive science.[7] By the early 1970s, the cognitive movement had surpassed behaviorism as a psychological paradigm,[8][9][10] and by the early 1980s, the cognitive approach had become the dominant line of research inquiry across most branches in the field of psychology.

Perceptual constancy

There is a tendency to maintain constancy (of size, color, and shape) in the perception of stimuli even though the stimuli have changed. For example, you recognize that small brownish dog in the distance as your neighbor's large golden retriever, so you aren't surprised by the great increase in size (size constancy) or the appearance of the yellow color (color constancy) when he comes bounding up. And in spite of the changes in the appearance of the dog moving toward you from a distance, you still perceive the shape as that of a dog (shape constancy) no matter the angle from which it is viewed. Perceptual constancy: the tendency to perceive objects as maintaining stable properties, such as size, shape, brightness, and color, despite differences in distance, viewing angle, and lighting.
Color constancy: perceiving objects as the same color even though they are different shades.
Size constancy: perceiving objects as being about the same size when they move farther away.
Shape constancy: the tendency to perceive objects as having a stable or unchanging shape regardless of changes in the retinal image resulting from differences in viewing angle.
Depth perception: the ability to see in three dimensions and to estimate distance.
Binocular depth cues: depth cues that depend on two eyes working together. Convergence: occurs when the eyes turn inward to focus on nearby objects; the closer the object, the greater the convergence. Binocular disparity (or retinal disparity): the difference between the two retinal images formed by the eyes' slightly different views of the object focused on.
Monocular depth cues: depth cues that can be perceived by only one eye. Types of cues: Interposition: when one object partly blocks your view of another, you perceive the partially blocked object as farther away. Linear perspective: parallel lines that are known to be the same distance apart appear to grow closer together, or converge, as they recede into the distance. Relative size: larger objects are perceived as being closer to the viewer, and smaller objects as being farther away. Texture gradient: near objects appear to have sharply defined textures, while similar objects appear progressively smoother and fuzzier as they recede into the distance. Atmospheric perspective: objects in the distance have a bluish tint and appear more blurred than objects close at hand.
Perception of motion: Real motion: perceptions of motion tied to movements of real objects through space. James Gibson pointed out that our perceptions of motion appear to be based on fundamental, but frequently changing, assumptions about stability; our brains search for some stimulus in the environment to serve as the assumed reference point for stability. When you're driving a car, you sense the car to be in motion relative to the outside environment. Apparent motion: perceptions of motion that are psychologically constructed in response to various kinds of stimuli. Phi phenomenon: an apparent motion illusion occurring when two or more stationary lights are flashed on and off in sequence, giving the impression that one light is actually moving from one spot to the next. Autokinetic illusion: perceptions of motion caused by movement of the eyes rather than of objects. Ambiguous figures: figures that can be seen in different ways to make different images; the best-known ambiguous figure is "Old Woman/Young Woman," by E. G. Boring.
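Size constancy, described above, can be given a rough quantitative reading: the visual angle an object subtends shrinks with distance, and scaling retinal size by perceived distance recovers a roughly constant size. A minimal Python sketch using the standard visual-angle formula; the object size and distances are invented for illustration:

    import math

    def visual_angle_deg(height_m, distance_m):
        # Visual angle subtended at the eye: theta = 2 * atan(h / (2 * d)).
        return math.degrees(2 * math.atan(height_m / (2 * distance_m)))

    # A 0.6 m dog at 4 m vs. 40 m: the retinal image shrinks about tenfold...
    near = visual_angle_deg(0.6, 4)    # ~8.6 degrees
    far = visual_angle_deg(0.6, 40)    # ~0.86 degrees

    # ...but scaling retinal size by perceived distance gives a near-constant
    # value, an informal statement of size constancy.
    print(near * 4, far * 40)          # both roughly 34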

allocentric

allocentric: not specific to a particular viewpoint. Brain region of rats suggesting an important role in maintaining an allocentric representation of the world: the hippocampal region in the temporal lobe. In brain imaging studies, the hippocampus is activated when humans are navigating their environment (an allocentric representation).
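The egocentric/allocentric distinction is, computationally, a coordinate transform: an allocentric (world-centered) location can be recovered from an egocentric (viewer-centered) one given the viewer's position and heading. A minimal Python sketch, not from the source; the positions and heading are invented:

    import math

    def ego_to_allo(agent_xy, heading_rad, ego_xy):
        # Rotate the viewer-centered offset by the agent's heading, then
        # translate by the agent's world position.
        (ax, ay), (ex, ey) = agent_xy, ego_xy
        c, s = math.cos(heading_rad), math.sin(heading_rad)
        return (ax + c * ex - s * ey, ay + s * ex + c * ey)

    # A landmark 2 m straight ahead of an agent standing at (5, 5) and facing
    # "north" (pi/2) has a fixed allocentric position, whatever the viewpoint.
    print(ego_to_allo((5, 5), math.pi / 2, (2, 0)))   # approximately (5.0, 7.0)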

analogical representation

analogical representation: a representation that shares some of the physical characteristics of an object.

synaptic gap

Small space between axon terminals and dendrites.

instance theories

instance theories propose that we actually store only specific instances, with the more general inferences emerging from those instances
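A toy sketch of what an instance (exemplar) theory claims, assuming a simple shared-feature similarity rule; the features and rule are invented, not from the source. No category summary is stored; the classification emerges at retrieval time from comparison with stored instances.

    # Toy exemplar model: only specific instances are stored; the "general"
    # judgment emerges at retrieval from those instances.
    stored = [
        ({"flies", "feathers", "small"}, "bird"),
        ({"flies", "feathers", "sings"}, "bird"),
        ({"fur", "four_legs", "barks"}, "dog"),
    ]

    def classify(features):
        # Similarity = number of shared features; the probe takes the label
        # of its most similar stored instance.
        best_features, best_label = max(stored,
                                        key=lambda inst: len(features & inst[0]))
        return best_label

    print(classify({"feathers", "small", "sings"}))  # "bird" -- inferred, never stored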

sensory neuron

sensory neuron: responds to a stimulus, i.e. a change in the internal or external environment; sensory neurons carry impulses from outside and inside the body to the brain and spinal cord.

Cognitive Dissonance

Cognitive dissonance: we like our attitudes and behaviors to be aligned. If we have to make a decision, then once we have made it we want to believe that our decision was right; for example, buyers are much more sure about a product after they have bought it.

Conceptual representation

Conceptual representation: posterior regions of the temporal, parietal, and occipital cortices.

search

Explain how a search can be inefficient. When the target and distractors in a visual search task contain the same basic features, the search is inefficient. Describe one type of visual search that is efficient. Feature search is efficient. It is defined as a search for a target defined by a single attribute such as a salient color or orientation.
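The efficiency contrast can be caricatured computationally: in an efficient feature search the target "pops out", so decision time is roughly flat in set size, whereas a search in which target and distractors share basic features behaves more like a serial scan. A minimal Python sketch, not from the source; the timing constants are invented for illustration:

    # Caricature of search efficiency: predicted reaction time (ms) vs. set size.
    def predicted_rt_ms(set_size, efficient):
        base = 400
        # Efficient (feature) search: near-zero slope, the target "pops out".
        # Inefficient search (target shares features with distractors): each
        # added item costs scanning time.
        slope = 0 if efficient else 40
        return base + slope * set_size

    for n in (4, 8, 16):
        print(n, predicted_rt_ms(n, True), predicted_rt_ms(n, False))
    # 4 -> 400 vs 560; 8 -> 400 vs 720; 16 -> 400 vs 1040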

top-down processing

Expectation (top-down processing) often overrides information actually available in the stimulus (bottom-up) which we are, supposedly, attending to. A "top-down" approach is one where an executive decision maker or other top person makes the decisions about how something should be done. This approach is disseminated under their authority to lower levels in the hierarchy, who are, to a greater or lesser extent, bound by the decisions. For example, when wanting to make an improvement in a hospital, a hospital administrator might decide that a major change (such as implementing a new program) is needed, and then the leader uses a planned approach to drive the changes down to the frontline staff (Stewart, Manges, Ward, 2015).[12] Positive aspects of top-down approaches include their efficiency and superb overview of higher levels.[12] Also, external effects can be internalized. On the negative side, if reforms are perceived to be imposed 'from above', it can be difficult for lower levels to accept them (e.g. Bresser-Pereira, Maravall, and Przeworski 1993). Evidence suggests this to be true regardless of the content of reforms. A top-down approach (also known as stepwise design and in some cases used as a synonym of decomposition) is essentially the breaking down of a system to gain insight into its compositional sub-systems in a reverse engineering fashion. In a top-down approach an overview of the system is formulated, specifying, but not detailing, any first-level subsystems. Each subsystem is then refined in yet greater detail, sometimes in many additional subsystem levels, until the entire specification is reduced to base elements. A top-down model is often specified with the assistance of "black boxes", which makes it easier to manipulate. However, black boxes may fail to clarify elementary mechanisms or be detailed enough to realistically validate the model. A top-down approach starts with the big picture and breaks it down from there into smaller segments.[1] In perception, top-down processing uses context, knowledge, and expectations to help recognize a pattern, e.g. using the surrounding context to identify a word. The brain perceives patterns as meaningful wholes (recognizing the faces of people we know) without needing to piece together their component parts; for example, singing the lyrics to a song that you know by heart.
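A minimal sketch of the stepwise-design sense of "top-down" described above; the scenario and function names are invented, loosely echoing the hospital example. The overview is written first, and each first-level subsystem starts as an unrefined "black box":

    # Top-down / stepwise design: the top level is specified first; each
    # subsystem is a stub to be refined in later passes.
    def improve_hospital():          # the top-level decision, made first
        plan = design_program()
        train_staff(plan)
        roll_out(plan)

    def design_program():            # refined in the next pass
        return {"name": "new admissions program"}

    def train_staff(plan):           # still a black box at this stage
        pass

    def roll_out(plan):              # detailed last, at the front line
        pass

    improve_hospital()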

FMSSAEP mapping

FMSSAEP mapping: mapping the areas of cortex involved in seizures. The process is usually done before neurosurgery to determine which part of the brain needs to be removed; the scalp is pulled back to view the brain, which makes the procedure invasive.

Feature-Comparison Model of Semantic Memory:

Feature-Comparison Model of Semantic Memory: E.E. Smith, E.J. Shoben and L.J. Rips postulated a theory in which emphasis was laid on semantic features. Their assumption was that there are two distinct types of features. First, there are those features which are essential aspects of the item's meaning; these are known as defining features. The second type of features do not form any part of the item's definition but are nonetheless descriptive of the item; these are referred to as characteristic features. For instance, if we take the word robin, there are some features true of robins, such as that they are 'living', have 'feathers', have 'wings' and have 'red breasts'. All these are defining features. Other features, however, may be associated with robins but are not necessary to define a robin. These include features such as 'likes to perch in trees', 'undomesticated', 'harmless' and 'smallish'. In situations where a subject must decide whether an instance belongs to a specific category (for example, deciding whether a robin is a bird), it is assumed that the set of features corresponding to the instance and category are partitioned into the two sub-sets corresponding to defining and characteristic features. Figure 10.10 illustrates the above features. Now this process of verifying whether an instance belongs to a category, in this case 'Is a robin a bird?', is assumed to be accomplished in two major stages as given in the figure. The first stage involves a comparison of both the defining and the characteristic features of the instance and the category to determine the degree to which the two sets of features are similar. If there is a high degree of correspondence between the instance features and the category features, the subject says 'yes' immediately. If the two sets of features have very little correspondence (low similarity), the subject can say 'no' immediately. However, if there is an intermediate level of similarity between the features of the instance and the features of the category, then a second stage is needed before the subject can reach a decision. In the second stage, the subject compares only the defining features of the instance and the category; if these match, a 'yes' response is made, otherwise the subject says 'no'. Smith et al. extended their model further by including the concept of the typicality effect. When a subject is asked to verify whether an instance belongs to a category, say birds, one is consistently faster in verifying some instances (for example, robin or canary) than others (for example, chicken). The faster instances are those that are judged by other, independent subjects to be more typical of the category. If the instance to be verified is highly typical of the category, the two share a large number of features, both defining and characteristic. When it is discovered during stage one that the instance and category have largely overlapping features, the subject can make an immediate response without executing stage two. For atypical instances, in contrast, there is not much overlap in the characteristic features; stage two must therefore be executed, and response time is accordingly longer. Though these models have been built on highly scientific lines with detailed analysis, they are not free from certain limiting factors.
Rips, Shoben and Smith, criticising Collins and Quillian, pointed out that most college students know what a mammal is, and if we add this concept to a hypothetical network that contains collie (a dog of a specific breed), dog and animal, it is placed between dog and animal: in a semantic hierarchy, mammal is closer than animal to either dog or to some particular type or breed of dog (for example, collie). According to the Collins and Quillian model, a person should therefore answer the question "Is a collie a mammal?" faster than the question "Is a collie an animal?" They found that people do not react as predicted by Collins and Quillian. Similarly, people take longer to answer the question "Is a potato a root?" even though vegetable is logically closer to potato in a semantic hierarchy. The concept of cognitive economy was criticised by Conrad. She simply asked subjects to describe a canary as a bird, an animal and so on. She then tabulated the frequency with which various properties were mentioned. It turned out that the properties frequently associated with canary (such as the fact that they are yellow) were the properties presumed by Collins and Quillian to be stored directly at the canary node, whereas the properties that Conrad found to be less frequent were presumed by Collins and Quillian to be stored with bird or with animal. She concluded that property frequency, rather than hierarchical distance, determines retrieval time. The active structural network model has been criticised on the grounds that it expresses semantic memory through a gigantic network which is so expansive that the underlying conceptual framework cannot be presented in a representational system. Collins' criticism of the feature comparison model is that the distinction between defining and characteristic features poses an inherent difficulty: there is no feature that is absolutely necessary to define something. For example, if a person removes the wings of a bird, it does not cease to be a bird. If the feathers are plucked from a robin, it does not stop being a robin. Furthermore, people do not appear to be able to make consistent decisions as to whether a feature is defining or characteristic. Is "having four legs" a defining feature of tables? What if you see a table-like object with only three legs? Do you still call it a table? Smith and his co-workers recognised the problems underlying such questions but continued to maintain this somewhat artificial distinction between defining and characteristic features. Despite these loopholes, the contribution of these models to our understanding of the human and material world remains remarkable. There are a few other models, such as the Human Associative Memory (HAM) model propounded by Anderson and Bower.
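The two-stage verification procedure described above can be sketched in a few lines. A minimal Python illustration; the feature sets and the similarity thresholds are invented for the example, and real response-time predictions would need more machinery:

    # Sketch of the Smith, Shoben and Rips two-stage comparison.
    ROBIN = {"defining": {"living", "feathers", "wings", "red_breast"},
             "characteristic": {"perches_in_trees", "undomesticated", "smallish"}}
    BIRD = {"defining": {"living", "feathers", "wings"},
            "characteristic": {"flies", "perches_in_trees"}}

    def verify(instance, category, high=0.6, low=0.2):
        # Stage 1: overall similarity over BOTH defining and characteristic features.
        inst_all = instance["defining"] | instance["characteristic"]
        cat_all = category["defining"] | category["characteristic"]
        overlap = len(inst_all & cat_all) / len(cat_all)
        if overlap >= high:
            return "yes (fast, stage 1)"
        if overlap <= low:
            return "no (fast, stage 1)"
        # Stage 2: only defining features are compared (the slow route).
        if category["defining"] <= instance["defining"]:
            return "yes (slow, stage 2)"
        return "no (slow, stage 2)"

    print(verify(ROBIN, BIRD))   # a typical instance is accepted at stage 1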

fusiform face

Face recognition: the fusiform face area in the temporal lobe activates when processing faces; it is also activated when participants merely imagine a face.[3] Over the past few decades there have been vast amounts of research into face recognition, indicating that faces undergo specialized processing within a region called the fusiform face area (FFA), located in the mid fusiform gyrus in the temporal lobe.[44] Debate is ongoing as to whether faces and objects are detected and processed in different systems and whether both have category-specific regions for recognition and identification.[45][46] Much research to date focuses on the accuracy of detection and the time taken to detect the face in a complex visual search array. When faces are displayed in isolation, upright faces are processed faster and more accurately than inverted faces,[47][48][49][50] but this effect was observed in non-face objects as well.[51] When faces are to be detected among inverted or jumbled faces, reaction times for intact and upright faces increase as the number of distractors within the array is increased.[52][53][54] Hence, it is argued that the 'pop out' theory defined in feature search is not applicable to the recognition of faces in such visual search paradigms. Conversely, the opposite effect has been argued, and within a natural environmental scene the 'pop out' effect of the face is significantly shown.[55] This could be due to evolutionary developments, as the need to be able to identify faces that appear threatening to the individual or group is deemed critical to survival.[56] More recently, it was found that faces can be efficiently detected in a visual search paradigm if the distractors are non-face objects;[57][58][59] however, it is debated whether this apparent 'pop out' effect is driven by a high-level mechanism or by low-level confounding features.[60][61] Furthermore, patients with developmental prosopagnosia, who suffer from impaired face identification, generally detect faces normally, suggesting that visual search for faces is facilitated by mechanisms other than the face-identification circuits of the fusiform face area.[62] Patients with forms of dementia can also have deficits in facial recognition and in the ability to recognize human emotions in the face. In a meta-analysis of nineteen different studies comparing normal adults with dementia patients in their abilities to recognize facial emotions,[63] the patients with frontotemporal dementia were seen to have a lower ability to recognize many different emotions. These patients were much less accurate than the control participants (and even in comparison with Alzheimer's patients) in recognizing negative emotions, but were not significantly impaired in recognizing happiness. Anger and disgust in particular were the most difficult for the dementia patients to recognize.[63] Face recognition is a complex process, and many more factors can affect one's recognition abilities. Other aspects to be considered include race and culture and their effects on one's ability to recognize faces;[64] some factors, such as the other-race effect, can influence one's ability to recognize and remember faces. There are so many factors, both environmental and internal to the individual, that can affect this task that it can be difficult to isolate and study each one. What do fMRI studies involving the fusiform face area demonstrate about attention?
These studies show that attentional selection can be used to perform one type of specialized processing rather than another. One study showed that the fusiform face area is especially important in the processing of faces and that the parahippocampal place area is especially important in the processing of places. If observers view an image of a face superimposed over an image of a house, the face area becomes more active when the observer is attending to the face, and the place area becomes more active when the observer is attending to the house.

nodes of ranvier

Gaps in the myelin sheath along an axon; the action potential is regenerated at each node (saltatory conduction).

Gestalt principles

Gestalt psychology is an approach to understanding psychological phenomena based on the premise that the 'parts' of a given behaviour are themselves determined by the intrinsic nature of the behaviour as a whole. The Gestalt approach has provided a foundation for the modern study of perception, and was originally founded by Kurt Koffka, Wolfgang Köhler and Max Wertheimer. Gestalt is also known for the "Law of Simplicity" or "Law of Prägnanz" (the entire figure or configuration), which states that every stimulus is perceived in its most simple form. Gestalt theorists followed the basic principle that the whole is greater than the sum of its parts. In other words, the whole (a picture, a car) carries a different and altogether greater meaning than its individual components (paint, canvas, brush; or tire, paint, metal, respectively). In viewing the "whole," a cognitive process takes place: the mind makes a leap from comprehending the parts to realizing the whole. We visually and psychologically attempt to make order out of chaos, to create harmony or structure from seemingly disconnected bits of information.

1. Law of similarity: similar things tend to appear grouped together. Grouping can occur in both visual and auditory stimuli; in the classic demonstration, a grid of colored circles is seen as rows of like-colored dots rather than as a mere collection of dots.
2. Law of Prägnanz: Prägnanz is a German term meaning "good figure," so this law is sometimes referred to as the law of good figure or the law of simplicity. It holds that objects in the environment are seen in a way that makes them appear as simple as possible: the classic demonstration figure is seen as a series of overlapping circles rather than as an assortment of curved, connected lines.
3. Law of proximity: things that are near each other seem to be grouped together. In the classic demonstration, circles on the left appear to form one grouping and those on the right another; because the objects are close to each other, we group them together.
4. Law of continuity: points that are connected by straight or curving lines are seen in a way that follows the smoothest path. Rather than seeing separate lines and angles, the lines are seen as belonging together.
5. Law of closure: things are grouped together if they seem to complete some entity. Our brains often ignore contradictory information and fill in gaps: an incomplete outline is still seen as a circle or a rectangle because the brain fills in the missing segments to create a meaningful image.
6. Law of common region: elements located within the same bounded region of space tend to be grouped together. For example, imagine three oval shapes drawn on a piece of paper, each with a dot at either end. The ovals are right next to each other, so the dot at the end of one oval is actually closer to a dot at the end of a neighbouring oval. Despite that proximity, the two dots inside the same oval are perceived as a group rather than the dots that are physically closest to each other.
It is important to remember that while these principles are referred to as laws of perceptual organization, they are actually heuristics, or short-cuts. Heuristics are usually designed for speed, which is why our perceptual systems sometimes make mistakes and we experience perceptual inaccuracies.

Magnetic Resonance Imaging (MRI)

Magnetic Resonance Imaging produces detailed 3D images of brain anatomy and has better spatial resolution than a CT scan.
Advantages of MRI:
- does not use radiation (preferred for children)
- images are not obscured by bone
- can map brain regions, e.g., for a stroke victim
- non-invasive
- has the best spatial resolution
Disadvantages of MRI:
- extremely expensive
- large magnet that can injure people
- loud
- bad for people who are claustrophobic
- poor temporal resolution

Reductionism vs. Holism

Reductionism vs. Holism: Reductionism holds that behavior can be understood by breaking it down into its simplest components; the biological perspective and behaviorism are examples of reductionist approaches to psychology. Holism holds that behavior must be understood as an integrated whole, emphasizing the subjective experience of the person; the humanistic perspective is an example of the holistic approach.

Representation of Knowledge

Representation of Knowledge: humans have a limited capacity to represent information in memory.

SPECT

SPECT is hemodynamic (dependent on blood flow); a "poor man's PET scan." The procedure is the same as PET, but the tracer lasts about 8 hours; the technique images function via the decay of the tracer, which is coupled to blood flow.
Advantages of SPECT:
- cheaper than PET
- helps diagnose brain diseases
- good for looking at cerebral blood flow
Limitations of SPECT:
- invasive
- tracer decays slowly, so radioactive material remains in the body for a while (about 8 hours)

perception

Selecting, organizing, and interpreting sensations.

Semantic networks

Semantic networks: the organization of facts in semantic memory and their retrieval times; the more frequently a fact about a concept is encountered, the more strongly that fact will be associated with the concept.

Sensory register

Sensory register, also called sensory memory, refers to the first and most immediate form of memory you have. The sensory register is your ultra-short-term memory: it takes in sensory information through your five senses (sight, hearing, smell, taste and touch) and holds it for no more than a few seconds. It is the entry point for raw information from the senses.

shaping

Shaping is a technique where successive approximations to a desired behaviour are reinforced until the desired behaviour is carried out.
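
As a rough illustration only (not from the source), shaping can be viewed as a loop in which the reinforcement criterion is tightened step by step. The toy simulation below uses a made-up numeric "response" whose average drifts toward whatever is reinforced; all names and numbers are illustrative.

```python
import random

def shape(target=1.0, step=0.1, trials_per_stage=20):
    """Toy simulation of shaping: reinforce successive approximations.

    'response' is a number the learner emits; reinforcement nudges the
    learner's average response toward the current criterion. Everything
    here is an illustrative assumption, not an experimental protocol.
    """
    criterion = step            # start with an easy approximation
    mean_response = 0.0         # learner's current tendency
    while criterion <= target:
        for _ in range(trials_per_stage):
            response = random.gauss(mean_response, 0.2)  # emitted behaviour
            if response >= criterion:                    # close enough to reinforce?
                mean_response += 0.05                    # reinforced tendency shifts up
        criterion += step       # raise the bar: next, closer approximation
    return mean_response

print(shape())  # the learner's tendency should end near the target behaviour
```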

Survey map

Survey map: a spatial image of the environment, like a visual image. Brain regions activated in way-finding tasks: parietal cortex and hippocampus.

effects of levels of processing on memory

Craik and Lockhart (1972) argued that deep processing leads to better long-term memory than shallow processing. However, the concept of depth is vague and cannot be observed, so it cannot be objectively measured. Eysenck (1990) claims that the levels of processing theory describes rather than explains.

terminal button

The end of an axon that contains the neurotransmitters

Wimmer, Robinson and Doherty (2017)

The development and relation of mental scanning and mental rotation were examined in 4-, 6-, 8-, and 10-year-old children and adults (N = 102). Based on previous findings from adults and ageing populations, the key question was whether they develop as a set of related abilities and become increasingly differentiated, or are unrelated abilities per se. Findings revealed that both mental scanning and rotation abilities develop between 4 and 6 years of age. Specifically, 4-year-olds showed no difference in accuracy between mental scanning and no-scanning trials, whereas all older children and adults made more errors in scanning trials. Additionally, only a minority of 4-year-olds showed a linear increase in response time with increasing rotation angle difference between two stimuli, in contrast to all older participants. Despite similar developmental trajectories, mental scanning and rotation performances were unrelated. Thus, adding to research findings from adults, mental scanning and rotation appear to develop as a set of unrelated abilities from the outset. Different underlying abilities such as visual working memory and spatial coding versus representing past and future events are discussed.

Citation: Wimmer MC, Robinson EJ, Doherty MJ (2017) Are developments in mental scanning and mental rotation related? PLoS ONE 12(2): e0171762. https://doi.org/10.1371/journal.pone.0171762
Introduction

Mental imagery, "seeing with the mind's eye" [1], is ubiquitous in daily tasks. We use imagery to think about distances and perspectives: for example, which way is shorter to the café in town (mental scanning), or how the café's front appears when approached from different directions (mental rotation). Theoretically, the consensus is that mental images are "quasi-pictorial" in nature, that is, they have a depictive format, as demonstrated in mental rotation and scanning paradigms [2, 3, 4, 5]; but see [6, 7] for the philosophical claim that mental images have no particular format. Although both scanning and rotation indicate a depictive mental imagery format, it is unclear whether they are related abilities. This is an important theoretical question, as a relation would suggest they underlie a general spatial representational ability; alternatively, imagery abilities may be unrelated. To investigate their relation, the current research examined developments of mental scanning and rotation in 4- to 10-year-old children versus adults.

Reason to suspect a relation comes from findings that adults and children preserve spatial properties of a visually perceived scene in their mental images [2, 3, 5, 8, 9, 10, 11, 12, 13]. For example, in the famous "island task" [9] participants scan their mental image of a previously presented island map containing landmarks at different distances apart. Adults and children show a linear time-distance relation: mental scanning time is linearly related to the distances in the real space [9, 13, 14]. Adults with autism spectrum disorders also show this effect [15]. Thus, mental images incorporate the metric information present in the original scene. The depictive imagery format is also evident in mental rotation. In a typical mental rotation paradigm, participants judge whether two shapes next to each other, rotated in different directions, are the same or different [5, 16]. Both children and adults show a linear increase in response time with increasing difference in rotation angle between the two [5, 12, 16, 17, 18], indicating that whole patterns are mentally rotated until both objects are aligned in orientation. Overall, evidence from both mental scanning and rotation clearly supports the notion that mental images are depictive in format in both children and adults.

Direct empirical support for a relation comes from a case of left parieto-occipital lesion leading to selective impairments in both mental scanning and rotation tasks whilst image generation abilities (generating a mental image from long-term memory) are intact [19]. Additionally, damage to the vestibular system has been shown to impair both mental rotation and scanning abilities [20]. However, as it is difficult to isolate the specific cortical areas involved, it is unclear whether exactly the same areas are causally involved in both abilities. Indeed, there are differing cortical activation patterns in mental rotation and scanning tasks [21, 22], suggesting potentially independent abilities. Nevertheless, mental scanning and rotation abilities have similar developmental trajectories, with considerable improvements between 4 and 6 years.
Mental rotation abilities appear not to be present before 5 years [12, 13, 17, 18, 23, 24], although basic forms of mental rotation may be found in male infants [25, 26]. In particular, in mental rotation tasks, in contrast to 6-year-old children, 4-year-olds do not yet show the linear response-time increase with increasing angle difference [12, 17]. Recent evidence revealed a similar developmental pattern in mental scanning: in the adapted "island task," 4-year-olds' scanning time is not linearly related to distance in real space, whereas 5-, 6-, 8-, and 10-year-olds show the effect of linearity typically found in adults [13]. Thus, both mental scanning and rotation abilities develop between the ages of 4 and 6 years.

In contrast to this evidence for a relationship between the two abilities, the only developmental study that has examined the relation between mental scanning and rotation found that, amongst 5-, 8- and 14-year-olds, they were unrelated [10]; correlations were absent in all age groups except 8-year-olds [10]. Furthermore, in adulthood there are large individual differences in imagery abilities, and participants performing a battery of imagery tasks show no relation between mental rotation linearity and mental scanning performance [27]. In ageing adults, mental rotation ability decreases significantly whereas scanning remains largely intact and is unrelated to rotation, highlighting different degrading trajectories in old age [28]. In sum, findings to date suggest that mental scanning and rotation are unrelated abilities in adulthood and old age [27, 28]. However, both have similar developmental trajectories [12, 13]. Thus, the question is whether they develop as a set of unrelated abilities per se, or are initially related and become increasingly differentiated.

To examine whether scanning and rotation comprise a set of related or unrelated abilities from an early age, we examined 4- and 6-year-olds, the age range during which these abilities develop, and compared them to 8- and 10-year-olds and adults. To assess mental rotation abilities, Estes' [17] task of rotated monkey stimuli was used, which has previously shown improvements between 4- and 6-year-olds' mental rotation abilities [12, 17]. To assess mental scanning, Kosslyn et al.'s [10] scanning task was adapted with concrete images: participants judged whether a probe (ball) fell on a previously presented stimulus (elephant) or was opposite it. If children scan, then they should take longer on opposite (scanning) trials than on no-scanning trials. Wimmer et al. [13] found that 5- but not 4-year-olds show the typical linearity effect in mental scanning. The aim here is to examine whether basic forms of mental scanning are present already at 4 years when the task is simply to scan or not to scan. Concrete stimuli were used to reduce performance factors: they are easier to rotate than abstract ones [29], and object familiarity and ease of naming facilitate visual remembering [30, 31]. Given the findings that both mental scanning and rotation develop between 4 and 6 years [12, 13] and that they are unrelated in adulthood [27] and old age [28], one possibility is that imagery abilities develop as a set of related abilities but become increasingly differentiated with age. If so, then we would expect correlations between scanning and rotation at the ages of 4, 6, and 8 years but not at a later age. Alternatively, if mental scanning and rotation are indeed a set of unrelated abilities, then this should be evident at an early age.
Thus, we should find no correlation between the two tasks across all age groups.

Method

Participants. A total of 82 children (20 4-year-olds (M = 4.9, SD = 4 months), 22 6-year-olds (M = 6.9, SD = 3 months), 20 8-year-olds (M = 8.6, SD = 4 months), 20 10-year-olds (M = 10.5, SD = 4 months)) and 20 adults (M = 21.6, SD = 89 months) took part. In each age group half of the participants were male and half female. Children were predominantly from a middle-class background and were recruited via local primary schools in the area. Adults signed up on the university's online participation system and received financial reimbursement. All participants who gave their consent (adults), or who had written parental consent and volunteered themselves on the day of testing (children), were included in the research. Ethical approval for this study, including its consent procedures, was obtained from the Research Ethics Committee at Warwick University.

Design. All participants completed both the mental rotation and scanning tasks alongside two other imagery tasks (generation and maintenance), reported in [32], either in the same session (10-year-olds and adults), across 2 sessions (6- and 8-year-olds) or across 4 sessions (4-year-olds). Task order was counterbalanced across participants. Repeated-measures analyses of variance (ANOVA) were performed to examine the effect of age on task performance. Correlational analyses were performed to examine the relation between mental scanning and mental rotation performance.

Materials and procedure. Mental scanning. This image-scanning task was adapted from Kosslyn et al.'s [10] task using concrete images and a cover story. Participants were presented with a rectangular grid, subtending about 36° by 22° of visual angle and filling the whole screen, with an elephant family (father, mother, child) located in different squares (Fig 1). The task was to judge whether the stimulus (ball) was in the square opposite the target (elephant) (scanning trial) or on the target's square (no-scanning trial). Deciding whether the stimulus fell on the opposite square requires mental scanning over a distance. The cover story was that the elephant family liked to catch the ball and could only catch it when it was thrown from the correct (opposite) square on the other side. The concept of "opposite" was explained with 4 video animations in which the ball travelled from its square to the opposite square, which was either empty or contained an elephant. Participants were also told to remember the positions of the elephants, as a wizard would make them invisible so that only the ball was visible. They received two practice trials with feedback to familiarise themselves with the procedure.

Fig 1. Mental scanning task: study, mask, and scanning trial (left) or no-scanning trial (right) with mother elephant as target. Scanning (left): "Will mum catch the ball?" No scanning (right): "Is the ball where mum is?" The correct answer in both is "yes". https://doi.org/10.1371/journal.pone.0171762.g001

Then the 3 experimental phases, study, scanning, and memory, started. Study: participants saw the first study scenario (3 elephants in different squares) and were instructed to point to each elephant (mum, dad, baby) and remember their locations.
Scenarios appeared for 30 seconds, bordered by flashing rotating rainbow colours to enhance visual attention and avoid boredom. Then a 20 ms mask appeared and the mental scanning trials followed. A ball appeared in one of the squares and the test question was asked: (i) no scanning: "Is the ball where (e.g.) mum is?" or (ii) scanning: "Will (e.g.) mum catch the ball?" After the child's "yes" or "no" response, the memory assessment followed. On the next screen, participants saw 4 different scenarios of the 3 elephants in different squares, one of which was the correct study scenario, and had to select the correct one. After that, the next study scenario appeared, following the same three phases, and so forth. There were 6 different scenarios, each appearing twice with 5 intervening trials, once in a scanning and once in a no-scanning trial, yielding 12 trials in total. Each elephant was the target 4 times (2 scanning / 2 no-scanning trials). Finally, participants received the image-scanning control trials. These trials followed exactly the same pattern as image scanning, except that all elephants and the ball were visible simultaneously. Participants judged whether (i) the ball was in the same location (no-scanning control) or (ii) opposite (scanning control).

Mental rotation. Estes' [17] task was used (Fig 2). The participants' task was to judge whether two monkeys appearing next to each other (subtending approximately 11° by 14° of visual angle) held up the same or different arms. The left monkey was always upright and the right one was rotated clockwise. There were 28 stimulus pairs in total: two "same" (right-right/left-left) and two "different" pairs (right-left/left-right) at 7 different angles, from 0° to 180° in 30° increments for the rotated monkey. Both stimuli remained in view; thus, no memory assessment or perception control condition was required.

Fig 2. Mental rotation task stimuli. Example of a 60° (right) and 150° (left) rotation angle difference. Reprinted from [17] under a CC BY license, with permission from David Estes, original copyright 1998. https://doi.org/10.1371/journal.pone.0171762.g002

Results and discussion

Outliers in response time (twice the mean in a given cell) were removed (2.62% of the data). Bonferroni post-hoc and confidence-interval adjustments were used throughout.

Preliminary analyses. There were no effects of gender on either accuracy or response time in rotation or scanning, neither within each age group nor overall (all ps > .05); therefore, gender was eliminated from subsequent analyses. To check whether even the youngest participants understood the concept of opposite in the mental scanning task, mean accuracy on perception control trials was examined. When stimuli were visible, participants had no problems judging whether elephants were able to catch the ball (scanning) or whether the ball was in one of the elephants' squares (no scanning) (4-year-olds: M = .96; 6-year-olds: M = 1; 8-year-olds: M = .99; 10-year-olds: M = 1; adults: M = 1). Thus, even the youngest children followed the scanning instructions.

Mental scanning. Accuracy. Mean accuracy was examined in a 5 (age group) x 2 (trial type: scanning versus no scanning) ANOVA, where the latter variable was manipulated within participants and the former between participants. Accuracy increased with age, F(4, 97) = 42.75, p < .001, ηp2 = .64 (Table 1).
Adjacent age-group comparisons revealed differences between 4- and 6-year-olds and between 8- and 10-year-olds (ps < .001). Four-year-olds performed at chance, t(19) = 0, p = 1, in contrast to all older age groups, which performed above chance (ps < .001). Moreover, accuracy was higher in no-scanning trials (M = .81) than in scanning trials (M = .70), F(1, 97) = 24.19, p < .001, ηp2 = .20. However, this was only the case for 6-, 8- and 10-year-olds (ps < .02), as neither 4-year-olds nor adults performed differently in scanning and no-scanning trials (ps > .70), as indicated by the age x trial-type interaction, F(4, 97) = 5.68, p < .001, ηp2 = .19. The null result in adults is due to ceiling performance (Table 1). These findings suggest that accurate scanning develops between 4 and 6 years of age.

Memory of study scenario. Correct recognition of the original study scenario improved with age, F(4, 97) = 36.45, p < .001, ηp2 = .60; adjacent age-group comparisons show that 4-year-olds memorized fewer study scenarios (M = .40) than 6-year-olds (M = .70, p < .001). Eight-year-olds (M = .70) remembered fewer scenarios than 10-year-olds (M = .85, p = .02), who did not differ (p = .39) from adults (M = .95). However, all age groups were above chance at remembering the original study scenario (chance = .25): 4-year-olds, t(19) = 3.35, p = .003; 6-year-olds, t(21) = 13.30, p < .001; 8-year-olds, t(19) = 12.11, p < .001; 10-year-olds, t(19) = 21.89, p < .001; and adults, t(19) = 27.05, p < .001. Furthermore, memory accuracy and scanning accuracy were strongly related, r = .82, p < .001, over and above a relation with age, rpartial = .52, p < .001. That image-scanning performance was correlated with memory indicates that accurate scanning depended on whether the scenario was correctly remembered. This finding highlights a potential role of working memory in scanning (see [33, 34, 35, 36, 37] for developments in working memory and working-memory constraints).

Mental rotation. Accuracy. Overall, accuracy increased with age, F(4, 101) = 26.19, p < .001, ηp2 = .52 (Table 2). Adjacent age-group comparisons revealed that 4-year-olds (M = .34) made more errors than 6-year-olds (M = .16, p < .001); there were no differences between other adjacent age groups (ps = 1.0). All age groups, including 4-year-olds, performed above chance (ts > 6.4, ps < .001).

Response time. Similarly, response times decreased with increasing age, F(4, 101) = 21.69, p < .001, ηp2 = .47, but there were no significant differences between adjacent age groups (4-year-olds: M = 5038 ms; 6-year-olds: M = 4493 ms; 8-year-olds: M = 3660 ms; 10-year-olds: M = 2730 ms; adults: M = 1900 ms) (all ps > .19) (Fig 4).

Linearity. To examine whether response times increased linearly with increasing rotation angle, linear regression was performed for each child, with the median response time for each angle as the dependent variable and rotation angle as the independent variable. An r2 greater than .55 indicated that the slope was significantly different from zero, and thus that response time increased with increasing angle. There was a main effect of age on r2, F(4, 101) = 10.46, p < .001, ηp2 = .30: 4-year-olds (M = .25) differed significantly from all older age groups (p < .001), namely 6-year-olds (M = .55), 8-year-olds (M = .62), 10-year-olds (M = .65), and adults (M = .60) (Fig 4). No other differences in r2 emerged.
Thus, 6-year-olds' measure of linearity (r2) was more than twice as high as 4-year-olds' and did not differ from adults', replicating previous findings [17]. There was also a significant age effect overall when examining rotators (i.e., those whose slope was significantly different from 0) versus non-rotators, Kruskal-Wallis χ2 = 20.03, df = 4, p < .001. Specifically, there were fewer rotators among 4-year-olds (15%) than in all older age groups: 6-year-olds (64%, p = .002), 8-year-olds (65%, p = .003), 10-year-olds (80%, p < .001) and adults (65%, p = .003), who did not differ from one another (all Fisher's exact tests, two-tailed). Thus, a minority of 4-year-olds and a majority of 6-year-olds can be classified as rotators, in line with previous findings [17].

Correlation between mental rotation and mental scanning. As both rotation and scanning follow a similar developmental trajectory (Fig 5), the question is whether both emerge as a set of related abilities. Correlational analyses of mental rotation and scanning mean accuracy and mean response time were conducted.

Accuracy. A strong relation was found between mental scanning and rotation, r = .45, p < .001, which did not, however, remain robust over and above the common association with age, rpartial = .15, p = .25. When examining correlations separately for the youngest 3 age groups, who did not perform at ceiling, the same pattern was found, rs < .32, ps > .14.

Response time. Mental rotation and scanning response times were related, r = .36, p = .004, but this did not remain robust when age was controlled for, rpartial = .14, p = .30. When examining age groups separately, there were relations in 6-year-olds, r = .45, p = .03, 10-year-olds, r = .53, p = .02, and adults, r = .71, p = .001, but not in 4- and 8-year-olds, rs < .07, ps > .77.

General discussion

The current aim was to examine whether mental scanning and rotation develop as a set of related abilities underlying general spatial representation or comprise unrelated abilities. Despite similar developmental trajectories, 4- and 6-year-olds revealed no association between mental scanning and rotation abilities. Specifically, there were no associations in accuracy, either overall or separately for each age group. Response times were unrelated when the effect of age was controlled for. Examining individual age groups, only 6-year-olds', 10-year-olds' and adults' scanning and rotation response times were related. As 10-year-olds and adults performed at ceiling, no correlations on accuracy could be conducted, leaving only the response-time data for interpretation; these latter results could reflect general processing speed [38] rather than related abilities [27, 28]. Given the lack of a relation in accuracy in the three youngest age groups and the absence of an overall relation in response time when age is taken into account, the current findings suggest that mental scanning and rotation are unrelated abilities from an early age. Thus, in addition to findings showing no relation between mental scanning and rotation in adulthood [21, 22, 27, 28], the current findings indicate that they are not specifically developmentally related from the outset. However, they do show similar developmental trajectories. Current findings replicated widely reported developments in mental rotation between 4 and 6 years [12, 17, 18, 23] and recent evidence of developmental trends in mental scanning [13].
Specifically, only a minority of 4-year-olds showed the linear increase in response time with increasing rotation angle, whereas all older age groups revealed an effect of linearity. In mental scanning, 4-year-olds performed at chance, in contrast to all older age groups. However, 4-year-olds' ceiling performance in perception control trials shows that they clearly followed task instructions and were able to judge the correct endpoint of a rolling object with a horizontal trajectory. Moreover, 4-year-olds responded faster in no-scanning than in scanning trials, suggesting that some rudimentary mental scanning had developed.

Thus, the question for the future is: what develops between 4 and 6 years to allow children to master mental scanning and rotation tasks? Both require the generation of a mental image and its maintenance in short-term memory in order to manipulate it, either by shifting attention over an object or scene in the image (mental scanning) or by transforming the image (mental rotation). Basic image generation and maintenance abilities are present at 4 years [32], but we do not yet know what allows successful attention shifting and transformation. For mental scanning, children in this age range show improvements in related cognitive processes such as visual working memory [36], memory for spatial locations [39], distance coding in spatial navigation tasks [40], and distance scaling [41, 42]. One possibility is that visual working memory allows successful scanning in the first place, permitting accurate responding; this is supported by the finding that memory performance was associated with scanning accuracy. Thus, working-memory demands may have interfered with image scanning. Five-year-olds asked to remember the locations of filled-in squares or dots within an image or a square perform at approximately 25% of adult level [33]. Visual and spatial working-memory span continue to improve from age 5 through to 11 years [34, 35, 36]. Also, the visual short-term memory capacity used for visual search depends on both the complexity of the stimulus and the number of objects [37]. Thus, developments in visual working memory may be linked to developments in mental scanning. Additionally, developing abilities in distance coding might allow children to mentally scan in a manner linearly related to distance in real space, revealing the effect of linearity. For mental rotation, Estes [17] has shown that children who were deemed "rotators" were more likely to explain their mental rotation processes in mental-state terms (e.g., "Pretend your mind put them right side up. I turn this one around in my mind."), indicating meta-cognitive insight into their mental rotation. Interestingly, mental rotation is linked to episodic recall for self-generated events [43], suggesting a link between representing past self-relevant events and representing the end position of a transformed stimulus. Thus, representation of self-relevant past or future events may underlie successful rotation. This may explain why it is not related to mental scanning, which relies on developments in visual working memory and distance-coding ability.
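
To make the linearity criterion used in the study's Results concrete, here is a minimal sketch (my own, not the authors' analysis code) of the per-participant regression: the median response time at each rotation angle is regressed on angle, and an r² above .55 classifies the participant as a "rotator." The data below are hypothetical.

```python
def linearity_r2(angles, median_rts):
    """Ordinary least-squares r^2 of median RT regressed on rotation angle."""
    n = len(angles)
    mean_x = sum(angles) / n
    mean_y = sum(median_rts) / n
    ss_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(angles, median_rts))
    ss_xx = sum((x - mean_x) ** 2 for x in angles)
    ss_yy = sum((y - mean_y) ** 2 for y in median_rts)
    return (ss_xy ** 2) / (ss_xx * ss_yy)  # squared Pearson correlation

# Hypothetical participant: median RT (ms) at each rotation angle (degrees).
angles = [0, 30, 60, 90, 120, 150, 180]
median_rts = [1800, 2100, 2300, 2700, 2900, 3300, 3500]

r2 = linearity_r2(angles, median_rts)
print(f"r^2 = {r2:.2f}; classified as rotator: {r2 > .55}")  # study's criterion
```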

The matching hypothesis

The matching hypothesis states that people have a tendency to choose partners whose level of attractiveness they believe to be equal to their own.

primary visual cortex

Primary visual cortex: brain region activated during imagery. The primary visual cortex (areas 17 & 18) is in the occipital lobe. How it is involved in mental imagery is less clear (the studies are inconsistent). Studies suggest that activation in these early visual areas tends to occur when imagery emphasizes high-resolution details of the images and shape judgments. These visual regions do appear to play a causal role in mental imagery: temporarily deactivating them results in impaired information processing.

symbolic representation

Symbolic representation: a type of mental representation that does not correspond to the physical characteristics of what it represents.

Amodal hypothesis

Amodal hypothesis: there is an abstract system that serves as an intermediate channel for converting back and forth between perceptual, verbal and motor representations.

myelin

A fatty covering on neurons that speeds up the signal

Affect

Affect denotes an instinctive reaction to a stimulus. An affective reaction is manifested before the cognitive processes required to form an emotion take place. Some theorists, however, believe that an affective reaction is the result of prior cognitive processing of information, and that our likes and dislikes, and feelings of pleasure and displeasure, are based on cognitive thought processes.

mental imagery

Brain regions involved in mental imagery: the parietal region is involved (e.g., in attending to locations and objects, and also in mental rotation).

Mental imagery (varieties of which are sometimes colloquially referred to as "visualizing," "seeing in the mind's eye," "hearing in the head," "imagining the feel of," etc.) is quasi-perceptual experience; it resembles perceptual experience but occurs in the absence of the appropriate external stimuli. A mental image or mental picture is the representation in a person's mind of the physical world outside that person.[1] It is an experience that, on most occasions, significantly resembles the experience of perceiving some object, event, or scene, but occurs when the relevant object, event, or scene is not actually present to the senses.[2][3][4][5] There are sometimes episodes, particularly on falling asleep (hypnagogic imagery) and waking up (hypnopompic imagery), when the mental imagery, being of a rapid, phantasmagoric and involuntary character, defies perception, presenting a kaleidoscopic field in which no distinct object can be discerned.[6] Mental imagery can sometimes produce the same effects as would be produced by the behavior or experience imagined.[7] The nature of these experiences, what makes them possible, and their function (if any) have long been subjects of research and controversy in philosophy, psychology, cognitive science, and, more recently, neuroscience. As contemporary researchers use the expression, mental images or imagery can comprise information from any source of sensory input; one may experience auditory images,[8] olfactory images,[9] and so forth. However, the majority of philosophical and scientific investigations of the topic focus upon visual mental imagery. It has sometimes been assumed that, like humans, some types of animals are capable of experiencing mental images,[10] though, owing to the fundamentally introspective nature of the phenomenon, there is little to no evidence either for or against this view.

Philosophers such as George Berkeley and David Hume, and early experimental psychologists such as Wilhelm Wundt and William James, understood ideas in general to be mental images. Today it is very widely believed that much imagery functions as mental representations (or mental models), playing an important role in memory and thinking.[11][12][13][14] William Brant (2013, p. 12) traces the scientific use of the phrase "mental images" back to John Tyndall's 1870 speech called the "Scientific Use of the Imagination". Some have gone so far as to suggest that images are best understood to be, by definition, a form of inner, mental or neural representation;[15][16] in the case of hypnagogic and hypnopompic imagery, however, it is not representational at all. Others reject the view that the image experience may be identical with (or directly caused by) any such representation in the mind or the brain,[17][18][19][20][21][22] but do not take account of the non-representational forms of imagery. In 2010, IBM applied for a patent on a method to extract mental images of human faces from the human brain; it uses a feedback loop based on brain measurements of the fusiform face area, which activates in proportion to the degree of facial recognition.[23] The patent was issued in 2015.[24] Common examples of mental images include daydreaming and the mental visualization that occurs while reading a book.
Another example is the pictures summoned by athletes during training or before a competition, outlining each step they will take to accomplish their goal.[25] When a musician hears a song, he or she can sometimes "see" the song notes in their head, as well as hear them with all their tonal qualities.[26] This is considered different from an after-effect, such as an after-image. Calling up an image in our minds can be a voluntary act, so it can be characterized as being under various degrees of conscious control. According to psychologist and cognitive scientist Steven Pinker,[27] our experiences of the world are represented in our minds as mental images. These mental images can then be associated and compared with others, and can be used to synthesize completely new images. In this view, mental images allow us to form useful theories of how the world works by formulating likely sequences of mental images in our heads without having to directly experience the outcome. Whether other creatures have this capability is debatable.

There are several theories as to how mental images are formed in the mind: the dual-code theory, the propositional theory, and the functional-equivalency hypothesis. The dual-code theory, created by Allan Paivio in 1971, holds that we use two separate codes to represent information in our brains: image codes and verbal codes. An image code is something like thinking of a picture of a dog when you are thinking of a dog, whereas a verbal code would be thinking of the word "dog".[28] Another example is the difference between thinking of abstract words such as justice or love and thinking of concrete words like elephant or chair. When abstract words are thought of, it is easier to think of them in terms of verbal codes, finding words that define or describe them. With concrete words, it is often easier to use image codes and bring up a picture of an elephant or a chair in your mind rather than words associated with or descriptive of them. The propositional theory involves storing images in the form of a generic propositional code that stores the meaning of the concept, not the image itself. The propositional codes can either be descriptive of the image or symbolic; they are then transferred back into verbal and visual codes to form the mental image.[29] The functional-equivalency hypothesis is that mental images are "internal representations" that work in the same way as the actual perception of physical objects.[30] In other words, the picture of a dog brought to mind when the word "dog" is read is interpreted in the same way as if the person were looking at an actual dog. Research has sought to identify a specific neural correlate of imagery; however, studies show a multitude of results.
Most studies published before 2001 suggest that neural correlates of visual imagery occur in Brodmann area 17.[31] Auditory performance imagery has been observed in the premotor areas, precuneus, and medial Brodmann area 40.[32] Auditory imagery in general occurs across participants in the temporal voice area (TVA), which allows top-down imaging manipulations, processing, and storage of auditory functions.[33] Olfactory imagery research shows activation in the anterior and posterior piriform cortex; experts in olfactory imagery have more gray matter in olfactory areas.[34] Tactile imagery is found to occur in the dorsolateral prefrontal area, inferior frontal gyrus, frontal gyrus, insula, precentral gyrus, and the medial frontal gyrus, with basal ganglia activation in the ventral posteromedial nucleus and putamen (hemisphere activation corresponds to the location of the imagined tactile stimulus).[35] Research on gustatory imagery reveals activation in the anterior insular cortex, frontal operculum, and prefrontal cortex.[31] Novices in a specific form of mental imagery show less gray matter than experts in mental imagery congruent with that form.[36] A meta-analysis of neuroimaging studies revealed significant activation of the bilateral dorsal parietal, interior insula, and left inferior frontal regions of the brain.[37] Imagery has been thought to co-occur with perception; however, participants with damaged sense-modality receptors can sometimes perform imagery in those modalities.[38] Neuroimaging of imagery has been used to communicate with seemingly unconscious individuals via fMRI activation of different neural correlates of imagery, prompting further study of such minimal states of consciousness.[39]

Philosophical ideas

Mental images are an important topic in classical and modern philosophy, as they are central to the study of knowledge. In the Republic, Book VII, Plato has Socrates present the Allegory of the Cave: a prisoner, bound and unable to move, sits with his back to a fire watching the shadows cast on the cave wall in front of him by people carrying objects behind his back. These people and the objects they carry are representations of real things in the world. Unenlightened man is like the prisoner, explains Socrates: a human being making mental images from the sense data that he experiences. The eighteenth-century philosopher Bishop George Berkeley proposed similar ideas in his theory of idealism. Berkeley stated that reality is equivalent to mental images; our mental images are not a copy of another material reality but that reality itself. Berkeley, however, sharply distinguished between the images that he considered to constitute the external world and the images of individual imagination. According to Berkeley, only the latter are considered "mental imagery" in the contemporary sense of the term. The eighteenth-century British writer Dr. Samuel Johnson criticized idealism. When asked what he thought about idealism, he is alleged to have replied "I refute it thus!" as he kicked a large rock and his leg rebounded.
His point was that the idea that the rock is just another mental image with no material existence of its own is a poor explanation of the painful sense data he had just experienced. David Deutsch addresses Johnson's objection to idealism in The Fabric of Reality when he states that, if we judge the value of our mental images of the world by the quality and quantity of the sense data they can explain, then the most valuable mental image, or theory, that we currently have is that the world has a real independent existence and that humans have successfully evolved by building up and adapting patterns of mental images to explain it. This is an important idea in scientific thought. Critics of scientific realism ask how the inner perception of mental images actually occurs. This is sometimes called the "homunculus problem" (see also the mind's eye). The problem is similar to asking how the images you see on a computer screen exist in the memory of the computer. To scientific materialism, mental images and the perception of them must be brain states. According to critics, scientific realists cannot explain where the images and their perceiver exist in the brain. To use the analogy of the computer screen, these critics argue that cognitive science and psychology have been unsuccessful in identifying either the component in the brain (the "hardware") or the mental processes that store these images (the "software").

In experimental psychology

Cognitive psychologists and (later) cognitive neuroscientists have empirically tested some of the philosophical questions related to whether and how the human brain uses mental imagery in cognition. One theory of the mind examined in these experiments was the "brain as serial computer" philosophical metaphor of the 1970s. Psychologist Zenon Pylyshyn theorized that the human mind processes mental images by decomposing them into an underlying mathematical proposition. Roger Shepard and Jacqueline Metzler challenged that view by presenting subjects with 2D line drawings of groups of 3D block "objects" and asking them to determine whether a given "object" was the same as a second figure, some of which were rotations of the first.[40] Shepard and Metzler proposed that if we decomposed and then mentally re-imaged the objects into basic mathematical propositions, as the then-dominant view of cognition "as a serial digital computer"[41] assumed, then the time it took to determine whether the object was the same or not should be independent of how much the object had been rotated. Shepard and Metzler found the opposite: a linear relationship between the degree of rotation in the mental imagery task and the time it took participants to reach their answer. This mental rotation finding implied that the human mind, and the human brain, maintains and manipulates mental images as topographic and topological wholes, an implication that was quickly put to the test by psychologists. Stephen Kosslyn and colleagues[42] showed in a series of neuroimaging experiments that the mental image of an object like the letter "F" is mapped, maintained and rotated as an image-like whole in areas of the human visual cortex. Moreover, Kosslyn's work showed that there are considerable similarities between the neural mappings for imagined stimuli and perceived stimuli.
The authors of these studies concluded that, while the neural processes they studied rely on mathematical and computational underpinnings, the brain also seems optimized to handle the sort of mathematics that constantly computes a series of topologically based images rather than calculating a mathematical model of an object. Recent studies in neurology and neuropsychology on mental imagery have further questioned the "mind as serial computer" theory, arguing instead that human mental imagery manifests both visually and kinesthetically. For example, several studies have provided evidence that people are slower at rotating line drawings of objects such as hands in directions incompatible with the joints of the human body,[43] and that patients with painful, injured arms are slower at mentally rotating line drawings of the hand from the side of the injured arm.[44] Some psychologists, including Kosslyn, have argued that such results occur because of interference between distinct systems in the brain that process visual and motor mental imagery. Subsequent neuroimaging studies[45] showed that interference between the motor and visual imagery systems could be induced by having participants physically handle actual 3D blocks glued together to form objects similar to those depicted in the line drawings. Amorim et al. have shown that, when a cylindrical "head" was added to Shepard and Metzler's line drawings of 3D block figures, participants were quicker and more accurate at solving mental rotation problems.[46] They argue that motoric embodiment is not just "interference" that inhibits visual mental imagery but is capable of facilitating it.

As cognitive neuroscience approaches to mental imagery continued, research expanded beyond questions of serial versus parallel or topographic processing to questions of the relationship between mental images and perceptual representations. Both brain imaging (fMRI and ERP) and studies of neuropsychological patients have been used to test the hypothesis that a mental image is the reactivation, from memory, of brain representations normally activated during the perception of an external stimulus. In other words, if perceiving an apple activates contour, location, shape and color representations in the brain's visual system, then imagining an apple activates some or all of these same representations using information stored in memory. Early evidence for this idea came from neuropsychology: patients with brain damage that impairs perception in specific ways, for example by damaging shape or color representations, generally seem to have impaired mental imagery in similar ways.[47] Studies of brain function in normal human brains support this same conclusion, showing activity in the brain's visual areas while subjects imagined visual objects and scenes.[48] These and numerous related studies have led to a relative consensus within cognitive science, psychology, neuroscience, and philosophy on the neural status of mental images. In general, researchers agree that, while there is no homunculus inside the head viewing these mental images, our brains do form and maintain mental images as image-like wholes.[49] The problem of exactly how these images are stored and manipulated within the human brain, in particular within language and communication, remains a fertile area of study.
One of the longest-running research topics on mental imagery is based on the fact that people report large individual differences in the vividness of their images. Special questionnaires have been developed to assess such differences, including the Vividness of Visual Imagery Questionnaire (VVIQ) developed by David Marks. Laboratory studies have suggested that subjectively reported variations in imagery vividness are associated with different neural states within the brain and also with different cognitive competences, such as the ability to accurately recall information presented in pictures.[50] Rodway, Gillies and Schepman used a novel long-term change-detection task to determine whether participants with low and high vividness scores on the VVIQ2 showed any performance differences.[51] They found that high-vividness participants were significantly more accurate at detecting salient changes to pictures than low-vividness participants,[52] replicating an earlier study.[53] Recent studies have found that individual differences in VVIQ scores can be used to predict changes in a person's brain while visualizing different activities.[54] Functional magnetic resonance imaging (fMRI) was used to study the association between early visual cortex activity, relative to the whole brain, while participants visualized themselves or another person bench pressing or stair climbing. Reported image vividness correlates significantly with the relative fMRI signal in the visual cortex. Thus, individual differences in the vividness of visual imagery can be measured objectively. Logie, Pernet, Buonocore and Della Sala (2011) used behavioural and fMRI data for mental rotation from individuals reporting vivid and poor imagery on the VVIQ. The groups differed in brain-activation patterns, suggesting that they performed the same tasks in different ways. These findings help to explain the lack of association previously reported between VVIQ scores and mental rotation performance.

Chambers and Reisberg 1985;

Chambers and Reisberg 1985: participants could interpret visual images in only one way and were unable to find an alternative interpretation; this contrasts with visual perception.

selective and nonselective pathways.

Describe the difference between the selective and nonselective pathways. Early visual processing feeds into two separate pathways—a selective and a nonselective pathway. The selective pathway passes through the bottleneck of attention and thus fully processes all of the features of one or a few objects at a time. The nonselective pathway does not pass through the attentional bottleneck and instead computes the ensemble statistics of a scene rather than the details of any particular objects.
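
As a loose illustration (my own, not from the source), the contrast between the two pathways can be thought of as detailed per-object feature binding versus scene-level summary statistics. The object features and names below are illustrative assumptions.

```python
from statistics import mean

# Toy scene: each object has a few features (values are made up).
scene = [
    {"color": "red",   "size": 3.2, "orientation": 10},
    {"color": "green", "size": 2.9, "orientation": 95},
    {"color": "red",   "size": 3.0, "orientation": 12},
]

def selective_pathway(obj):
    """Attend to one object: fully bind all of its features together."""
    return (obj["color"], obj["size"], obj["orientation"])

def nonselective_pathway(objects):
    """No attentional bottleneck: compute ensemble statistics of the
    scene without representing any individual object in detail."""
    return {"mean_size": mean(o["size"] for o in objects),
            "mean_orientation": mean(o["orientation"] for o in objects)}

print(selective_pathway(scene[0]))   # full detail for the one attended object
print(nonselective_pathway(scene))   # only the gist of the whole scene
```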

Franklin and Tversky 1990

Franklin and Tversky 1990: participants read a description of a scene and were then tested on which objects lay in which direction.

Hierarchical Network Model of Semantic Memory:

Hierarchical Network Model of Semantic Memory: This model of semantic memory was postulated by Allan Collins and Ross Quillian. They suggested that items stored in semantic memory are connected by links in a huge network. All human knowledge, knowledge of objects, events, persons, concepts, etc., is organised into a hierarchy of superordinate and subordinate sets, with their properties or attributes stored alongside them. These properties are logically related and hierarchically organised. The following example illustrates the relationship between the sets: the superordinate of dog is animal, but a dog is also a mammal, belongs to the group of domesticated animals, is a quadruped, and has subordinates such as Alsatian, hound, etc.

Let us look at Collins and Quillian's study as an example for a better understanding of this model. In their hierarchically organised structure one can see that the superordinate of canary is bird, the superordinate of shark is fish, and the superordinate of fish is animal. One can notice further that a property characterizing a particular class of things is assumed to be stored only at the place in the hierarchy that corresponds to that class. This assumption forms the basis of cognitive economy. For example, a property that characterizes all types of fish (the fact that they have gills and can swim) is stored only at the level of fish; gills and other such features are not stored again with the different types of fish (salmon, shark, etc.) even though they all have gills. Similarly, bird, the superordinate of canary, is itself an animal, and specific properties are stored only at the appropriate levels in the hierarchy.

Given this hypothesized network structure, Collins and Quillian's next task was to determine how information is retrieved from the network. To answer this question an experiment was carried out in which subjects were asked to answer 'yes' or 'no' to simple questions. Consider, for example, the following questions about canaries:
1. Does a canary eat?
2. Does a canary fly?
3. Is a canary yellow?
The three questions differ in the semantic level at which the information needed to answer them is stored. Consider the first question, "Does a canary eat?" The information "eats" is stored at the level of animal, two levels away from canary. Likewise, the information "can fly" and "is yellow" (needed to answer the second and third questions) is stored one and zero levels away from canary, respectively. The major point of interest in this model was the reaction time, the time taken to respond to the questions. Results of the experiment revealed that the more levels away the information is stored, the longer it takes to retrieve. Their explanation is as follows: to answer the third question, the subject enters the level in memory that corresponds to 'canary' and there finds the information that canaries are yellow, so the question is answered relatively fast. To answer the second question, the subject still enters the memory level that corresponds to 'canary' but does not find any information at that level concerning whether or not canaries fly; the subject therefore moves up the hierarchy to the level where information about birds is stored and there finds that birds fly. By combining the information that canaries are birds and that birds fly, the question can be answered.
Due to the extra step of moving up the hierarchy, question two takes somewhat longer to answer than question three. The first question takes even longer for the same sort of reason: to answer it, the subject cannot use any information stored at the level of 'canary' or 'bird' but must move up an additional level in the hierarchy, to 'animal'. It is concluded that, because a canary is a bird, a bird is an animal, and animals eat, the canary must eat too. The reason some questions take longer to answer than others, then, is that some questions require more travelling from level to level in the semantic hierarchy. Using a similar rationale, Collins and Quillian predicted that it takes less time to answer "Is a canary a bird?" than "Is a canary an animal?": to answer the latter, a subject must move up two levels from canary to animal, whereas to answer the former, the subject must move up only one level. On average, people took about 75 milliseconds longer to answer "Does a canary eat?" than "Does a canary fly?", and about 75 milliseconds longer to answer the question about flying than "Is a canary yellow?"

(Figure: graphical representation of travelling from one level of memory to another in the semantic hierarchy.)

2. Active Structural Network Model of Semantic Memory: The active structural network model was postulated by Norman and Lindsay. It can be understood through their analysis of two simple sentences: "Peter put the package on the table. Because it wasn't level, it slid off." These sentences refer to objects, a person and events. Figure 10.9 shows the diagrammatic sketch representing this information in a semantic network. The network consists of information expanded in terms of events, instances of the movements involved, the modes of their relations, the direction of the relationships, etc. This elaborate network representation is said to form the basis of human memory.

Let us consider the figure for a moment. The basic conceptual information shows that Peter caused the package to move from its earlier location to the top of the table, and that gravity was the causal agent that then acted upon the package, causing it to move from the table top to the floor. The first movement is represented by a node, the oval numbered 1. The words attached to the ovals in the figure are called relations; they show how the different node structures are related to one another. Looking at the first node we see that it represents an instance of the act 'move'. This particular instance of 'move' has its cause, Peter (shown diagrammatically), and the object being moved is the package (again shown diagrammatically). The location to which the moved object is taken is the table. The second node, the oval labelled 2, is another instance of 'move'. Here the cause is gravity, the object is the same (the package), and the movement takes place from a 'From' location (the table top) to a 'To' location (the floor). The drawings of the package and Peter are instances of the nodes named "package" and "Peter". The representation can be elaborated further: Peter put a package on the table, an event of which Peter was the agent, which caused the package to change its location from an unspecified place to a new place, on top of the table.
It changed its place because the first position was higher than the second, and the movement was caused by the force of gravity. Detailed analysis of this kind can be carried on and on, but the conceptual network presented here is sufficient to give an idea of how words and events create relationships, concepts, etc. and form a complex network. Thus one can see that this model of semantic memory conceives of human memory as a giant network of interconnected nodes, where the nodes are assumed to correspond to individual concepts, ideas, or events in the system.
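Because the model is just a graph of "isa" links with properties attached at the highest applicable level, the retrieval account above is easy to sketch in code. The Python below is a toy illustration, not Collins and Quillian's implementation: the node names, the properties, and the 75 ms-per-level figure are taken from the example above, while the dictionary structure and the verify function are assumptions of this sketch.

```python
# Minimal sketch of a Collins & Quillian-style hierarchical semantic
# network. Properties live only at the highest level where they apply
# (cognitive economy); answering a question means walking "isa" links
# upward until the property is found.

network = {
    "animal": {"isa": None,     "properties": {"eats", "breathes"}},
    "bird":   {"isa": "animal", "properties": {"flies", "has wings"}},
    "fish":   {"isa": "animal", "properties": {"has gills", "swims"}},
    "canary": {"isa": "bird",   "properties": {"is yellow", "sings"}},
    "shark":  {"isa": "fish",   "properties": {"bites"}},
}

def verify(concept, prop, ms_per_level=75):
    """Walk up the hierarchy until the property is found.

    Returns (answer, levels travelled, predicted extra reaction time).
    """
    levels = 0
    node = concept
    while node is not None:
        if prop in network[node]["properties"]:
            return True, levels, levels * ms_per_level
        node = network[node]["isa"]
        levels += 1
    return False, levels, None

print(verify("canary", "is yellow"))  # (True, 0, 0)   -- fastest
print(verify("canary", "flies"))      # (True, 1, 75)  -- one level up
print(verify("canary", "eats"))       # (True, 2, 150) -- two levels up
```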

Leon Festinger and Merrill Carlsmith (1959)

Leon Festinger and Merrill Carlsmith conducted an experiment in 1959 in order to demonstrate the phenomenon of cognitive dissonance. Students were asked to perform a boring task and then to convince someone else that it was interesting. The researchers theorized that people would experience a dissonance between the conflicting cognitions, "I told someone that the task was interesting", and "I actually found it boring."

Moyer 1973;

Moyer 1973; measured the speed with which subjects could judge the relative size of two animals from memory; the larger the size disparity, the less time needed to decide (a negative correlation between disparity and reaction time).
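A minimal sketch of the pattern Moyer reported, with invented numbers: the animal "sizes" and the linear reaction-time model below are illustrative assumptions, not Moyer's data; only the direction of the relationship (bigger disparity, faster judgement) comes from the entry above.

```python
# Illustrative symbolic-distance sketch: judgement time falls as the
# remembered size disparity between two animals grows.
sizes = {"mouse": 1, "cat": 3, "dog": 4, "horse": 8, "elephant": 10}

def judge_rt(a, b, base_ms=900, slope_ms=40):
    """Predicted time to pick the larger animal: big gap -> fast answer."""
    disparity = abs(sizes[a] - sizes[b])
    return base_ms - slope_ms * disparity

print(judge_rt("mouse", "elephant"))  # large disparity, fast (540 ms)
print(judge_rt("cat", "dog"))         # small disparity, slow (860 ms)
```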

phonemes

Phonemes are the smallest distinctive sound units of a language.

Temporal Cortex

Temporal Cortex includes the hippocampus, which is involved in the storage of new memories

compute the ensemble statistics of a scene?

What does it mean to compute the ensemble statistics of a scene? The ensemble statistics of a scene are the distribution of properties such as orientation, size, or color over a set of objects in a scene. Importantly, the distribution of these properties is represented holistically—as an ensemble—giving the observer a global description of the entire scene rather than a local description of any of the particular elements in the scene. Such ensemble statistics can be used to quickly process important properties of the world, such as scene gist and layout.
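A small sketch of what "computing ensemble statistics" could look like for a toy scene, assuming a made-up list of objects with sizes and orientations: the observer's summary keeps only the distribution (here a mean and spread), not any individual item. The circular averaging of orientations is a standard trick added here for correctness, not something claimed by the entry above.

```python
# Toy ensemble-statistics computation: summarise a scene's objects as a
# distribution of properties rather than a list of individual items.
import math
import statistics

scene = [
    {"size": 2.0, "orientation_deg": 10},
    {"size": 2.4, "orientation_deg": 15},
    {"size": 1.8, "orientation_deg": 5},
    {"size": 2.2, "orientation_deg": 12},
]

mean_size = statistics.mean(obj["size"] for obj in scene)
sd_size = statistics.stdev(obj["size"] for obj in scene)

# Orientations are circular, so average them as unit vectors.
xs = [math.cos(math.radians(o["orientation_deg"])) for o in scene]
ys = [math.sin(math.radians(o["orientation_deg"])) for o in scene]
mean_orientation = math.degrees(math.atan2(sum(ys), sum(xs)))

print(f"mean size {mean_size:.2f} +/- {sd_size:.2f}")
print(f"mean orientation {mean_orientation:.1f} deg")
```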

Wanner 1968;

Wanner 1968; memory for meaning is equally good whether or not people are warned to remember, but memory for stylistic (wording) changes does improve with warning, because warned readers pay closer attention to the exact wording

associative links

associative links connections in memory that tie one memory, or concept, to another

central executive

central executive system for controlling subsidiary systems such as the visuospatial sketchpad and the phonological loop; the attentional control mechanism for working memory; heavily relied on for the use of language

parahippocampal place area

parahippocampal place area a region of the temporal cortex that responds to pictures of locations

partial report procedure

partial report procedure participants are cued to report only some of the items in a display; leads to a more accurate measure of visual sensory memory
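The logic of the partial-report procedure (after Sperling's classic experiments) fits in a one-line formula: if observers report most of a randomly cued row, the whole display must have been available in iconic memory. The numbers in this Python sketch are illustrative, not Sperling's data.

```python
# Partial-report logic: estimate how many items were available in the
# whole display from performance on one randomly cued row.
def estimated_availability(items_per_row, rows, reported_from_cued_row):
    """Items available = fraction of cued row reported x display size."""
    display_size = items_per_row * rows
    return (reported_from_cued_row / items_per_row) * display_size

# Whole report typically yields only ~4 items; partial report implies
# far more were briefly available:
print(estimated_availability(items_per_row=4, rows=3,
                             reported_from_cued_row=3))  # 9.0 of 12
```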

structural diffusion tensor imaging (DTI)

structural diffusion tensor imaging (DTI) determines the origin of white matter tracts by measuring the diffusion of water molecules along axons. It is good for viewing head-injury trauma: the back-and-forth motion of the head, especially in a major car accident, can cause internal damage such as shearing of white matter connections. Advantages of DTI: maps out myelinated axons and the origin of white matter tracts; helps researchers understand how different areas function by revealing white matter connectivity.

the interference theory

the interference theory which hypothesizes that memory is lost due to interference from other information: in proactive interference, previously learned information disrupts the retention of new information, while in retroactive interference, newly learned information disrupts the recall of older information. The interference theory works along the same rationale as the information processing theory, and these two main types, proactive and retroactive, are elaborated and compared here.

visual sensory store

visual sensory store memory system that can effectively hold all the information in the visual display; aka iconic memory

whole report procedure

whole report procedure participants are asked to report all the items of a display

Palmer

Palmer showed that context influences object recognition: subjects were presented with a contextual scene (or none), and were then presented with an object to identify; scene context aided recognition of objects that would belong in that scene and hurt recognition of objects that would not appear in that scene.

cognitive theory

The cognitive theory describes behavior in terms of the flow of information. Cognitive science is used to analyze how the brain processes information. Cognition is the attempt to understand one's surroundings and make sense of them.

conative part of the brain

The word 'conation' comes from the Latin word conatus which means a natural tendency or impulse. The conative part of our mind determines how we act on our thoughts and emotions. Our instinctive style of acting is known as our conative style.

interneuron

Interneurons connect sensory neurons and motor neurons and carry impulses between them; they are concentrated in the brain and spinal cord. The stimulus for an interneuron is a neurotransmitter (chemical) released from a sensory neuron or another interneuron.

receptor

A small area on the dendrite that receives the signal from another neuron

Automaticity

Automaticity is a term referring to processes, reactions or behaviours which have become faster and require less or no conscious effort as a result of practice.

sustained attention

Sustained attention is used when you need to focus on one specific task or activity for a long period of time (playing a video game). It is one of the two types of attention (along with selective attention) needed when you have to focus on one thing at a time; the full categorization of attention types is given under "types of attention" below.

types of attention

Define attention, and distinguish between divided attention and selective attention. Attention is any of the very large set of selective processes in the brain. To deal with the impossibility of handling all inputs at once, the nervous system has evolved mechanisms that restrict processing to a subset of things, places, ideas, or moments in time. Selective attention is the form of attention involved when processing is restricted to a subset of the possible stimuli; it is the ability to select from many factors or stimuli and to focus on only the one that you want while filtering out other distractions. Sustained attention is the ability to focus on one specific task for a continuous amount of time without being distracted. Alternating attention is the ability to switch your focus back and forth between tasks that require different cognitive demands. Divided attention is the ability to process two or more responses or react to two or more different demands simultaneously; it is often referred to as multi-tasking. An example of divided attention is reading this while continuing to be aware of music playing in the room.

Categorizing the types: the first two (sustained and selective) are needed when you have to focus on one thing at a time. Sustained attention is used when you need to focus on one specific task or activity for a long period of time (playing a video game); selective attention is used to focus on one activity in the midst of many activities (listening to a friend at a loud party). The other two (alternating and divided) are needed when a person has to focus on multiple things at once: alternating attention is used to switch back and forth between tasks or activities (reading a recipe and preparing a meal), and divided attention is used to complete two or more tasks simultaneously (talking on the phone while surfing the web).

depth perception

Depth perception is the visual ability to perceive the world in three dimensions, enabling judgements of distance. Depth perception arises from a variety of depth cues, which are typically classified into monocular and binocular cues.

Monocular cues can provide depth information when viewing a scene with one eye, and include:
- Motion parallax: this effect can be seen clearly when driving in a car; nearby things pass quickly, while far-off objects appear stationary.
- Perspective: standing on a straight road and looking down it, the road appears to narrow as it goes off into the distance.
- Aerial perspective: images seem blurrier the farther away they are.
- Overlap or interposition: if one object partially blocks the view of another object, it is perceived as being closer.
- Texture gradient: the texture of an object can be seen clearly when close-by, but becomes less and less apparent the farther away the object is.

Binocular cues provide depth information when viewing a scene with both eyes, and include:
- Stereopsis or retinal disparity: by using two images of the same scene obtained from slightly different angles (right and left eyes), the brain can calculate depth in the visual scene, providing a major means of depth perception.
- Convergence: the simultaneous inward movement of both eyes toward each other when viewing a near object, stretching the eye muscles, which helps in depth/distance perception.

Divided Attention

Divided Attention Divided attention is the ability to process two or more responses or react to two or more different demands simultaneously. It is often referred to as multi-tasking: dividing your attention between two or more tasks. Examples of divided attention include checking email while listening in a meeting, talking with friends while making dinner, or talking on the phone while getting dressed. Unlike alternating attention, when you are using divided attention you do not change from one task to a completely different task; instead, you attempt to perform them at the same time. You are really splitting your attention rather than alternating it, so you are only focusing part of your attention on each task. Although divided attention is thought of as the ability to focus on two or more stimuli or activities at the same time, it is humanly impossible to concentrate on two different tasks simultaneously: your brain can only process one task at a time. You are not truly "focused" on both tasks at once; you are continuously alternating your attention between them. That is why it is so difficult, and dangerous, to text and drive or talk and drive. You are able to use divided attention successfully because of muscle memory and/or habit, which allows you to perform two or more tasks seemingly simultaneously, such as reading music and playing an instrument, talking to a person while typing, or driving your car while listening to the radio. In these cases you are not really focusing on hand positions when playing the instrument or concentrating on the individual acts of driving; you are able to do the task without conscious effort or actually paying attention.

Change blindness

Explain how the flicker paradigm helps to examine the phenomenon of "change blindness." In the flicker paradigm, observers are given a picture memory experiment. First they see a picture of a scene; it vanishes for a split second and is then replaced by a similar image. The task is to determine what changed between the first and second images. The two images continue to flip back and forth (with a blank screen appearing between them) until the observer spots the change or time runs out. The results of this task show that observers are slow at detecting changes, especially compared to performance on tasks in which the images flip back and forth without a blank screen appearing between them. This failure to notice a change between two scenes is referred to as "change blindness." Change blindness is a perceptual phenomenon that occurs when a change in a visual stimulus is introduced and the observer does not notice it; for example, observers often fail to notice major differences introduced into an image while it flickers off and on again.[1] People's poor ability to detect changes has been argued to reflect fundamental limitations of human attention. Change blindness has become a highly researched topic, and some have argued that it may have important practical implications in areas such as eyewitness testimony and distractions while driving.

Feature-matching theories

Feature-matching theories Feature-matching theories propose that we decompose visual patterns into a set of critical features, which we then try to match against features stored in memory. For example, in memory I have stored the information that the letter "Z" comprises two horizontal lines, one oblique line, and two acute angles, whereas the letter "Y" has one vertical line, two oblique lines, and one acute angle. I have similar stored knowledge about the other letters of the alphabet. When I am presented with a letter of the alphabet, the process of recognition involves identifying the types of lines and angles and comparing these to stored information about all letters of the alphabet. If presented with a "Z", as long as I can identify the features then I should recognise it as a "Z", because no other letter of the alphabet shares this combination of features. The best known model of this kind is Oliver Selfridge's Pandemonium.

One source of evidence for feature matching comes from Hubel and Wiesel's research, which found that the visual cortex of cats contains neurons that respond only to specific features (e.g. one type of neuron might fire when a vertical line is presented; another type might fire when a horizontal line moving in a particular direction is shown).

Some authors have distinguished between local features and global features. In a paper titled "Forest before trees", David Navon suggested that "global" features are processed before "local" ones. He showed participants large letter "H"s or "S"s that were made up of smaller letters, either small Hs or small Ss. People were faster to identify the larger letter than the smaller ones, and the response time was the same regardless of whether the smaller letters (the local features) were Hs or Ss. However, when required to identify the smaller letters, people responded more quickly when the large letter was of the same type as the smaller letters.

One difficulty for feature-matching theory comes from the fact that we are normally able to read slanted handwriting that does not conform to the feature descriptions given above. For example, if I write a letter "L" in a slanted fashion, I cannot match it to a stored description stating that an L must have a vertical line. Another difficulty arises in generalising the theory to the natural objects that we encounter in our environment.
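A toy, Pandemonium-flavoured feature matcher, using the "Z"/"Y" feature inventories from the paragraphs above. The feature names and the scoring rule (overlap minus a penalty for mismatched features) are assumptions of this sketch, not Selfridge's model.

```python
# Toy feature-matching recogniser: letters are stored as sets of
# critical features; recognition picks the stored letter whose feature
# set best matches the input.
stored = {
    "Z": {"horizontal", "horizontal2", "oblique", "acute", "acute2"},
    "Y": {"vertical", "oblique", "oblique2", "acute"},
}

def recognise(input_features):
    """Return the letter whose features overlap most with the input."""
    def score(letter):
        feats = stored[letter]
        shared = len(feats & input_features)
        mismatched = len(feats ^ input_features)  # penalise mismatches
        return shared - 0.5 * mismatched
    return max(stored, key=score)

print(recognise({"horizontal", "horizontal2", "oblique",
                 "acute", "acute2"}))               # -> Z, exact match
print(recognise({"vertical", "oblique", "acute"}))  # -> Y, closest match
```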

Humanistic approach

Humanistic - Behavior is shaped by ideas and experiences. It is subjective. As a reaction to psychodynamics and behaviorism, humanistic psychology evolved in the 1950s. Theorists who dealt with this perspective sought to understand the meanings of human behavior. They advocated that the understanding of human behavior is personal and subjective. Our behavior is the outcome of the link between our ideas and experiences. In his postulates of humanistic psychology, James Bugental says that human beings have a human context and that they are conscious about their behavior in the context of others. He suggested that human beings have choices and responsibilities and that they are able to derive a meaning from behavior and apply creativity to their thoughts. The humanistic perspective of psychology includes counseling and therapy. Self-help is a vital component of this perspective.

Id, Ego and Superego

Id, Ego and Superego The id, ego and superego are the three parts of the psychic apparatus defined in Sigmund Freud's structural model of the psyche; they are functions of the mind. The Id is the aspect of personality that is driven by internal and basic drives and needs. These are typically instinctual, such as hunger, thirst, and the drive for sex, or libido. The id acts in accordance with the pleasure principle, in that it avoids pain and seeks pleasure. Due to the instinctual quality of the id, it is impulsive and often unaware of the implications of actions. It is unconscious, selfish, childish and present from birth. The Superego works in contradiction to the id and it comprises the organized part of the personality structure, mainly but not entirely unconscious, that includes the individual's ego ideals, spiritual goals, and the psychic agency (commonly called "conscience") that criticizes and prohibits one's drives, fantasies, feelings, and actions. The Super-ego can be thought of as a type of conscience that punishes misbehavior with feelings of guilt, controlling our sense of right and wrong. The ego acts according to the reality principle and holds psychic functions such as judgment, tolerance, reality testing, control, planning, defense, synthesis of information, intellectual functioning, and memory, which help us to make sense of our thoughts and the world around us. It also acts as a moderator between the id and superego. When a conflict arises between the two, the Ego employs defense mechanisms.

illusion

Illusion A false perception of actual stimuli involving a misperception of size, shape, or the relationship of one element to another.
- R. L. Gregory: believed that susceptibility to the Müller-Lyer and other such illusions is not innate, and that the culture in which people live is responsible to some extent for the illusions they perceive.
- Segall and others: tested 1,848 adults and children from 15 different cultures to see whether susceptibility to illusions is due to experience; the study revealed that experience was a factor.
- Stewart: did a study to see if race offered an explanation for the cultural differences in observing illusions; no significant differences in susceptibility to the illusions were found based on race.
- Pedersen and Wheeler: studied Native American responses to the Müller-Lyer illusion among two groups of Navajos. The group who lived in rectangular houses and had experienced corners, angles, and edges tended to see the illusion; the other group, whose cultural experience consisted of round houses, tended not to see it.

Magnetoencephalography MEG

Magnetoencephalography MEG Measures local magnetic field changes from the surface of the scalp, using superconducting coils (SQUIDs). Advantages of MEG: excellent temporal resolution; better spatial resolution than EEG and ERPs; better at localizing magnetic field changes as they are happening; non-invasive. Disadvantages of MEG: very expensive to buy and maintain; machines are rare (only about two in Philadelphia, for example); gives direct information about the brain's activity but cannot give information about subcortical structures.

tertiary emotions

Plutchik also stated that humans experienced not only primary and secondary emotions, but tertiary emotions as well. These basic emotions along with the subsequent secondary and tertiary emotions are mentioned below.

Selective Attention

Selective Attention Explain why selective attention is necessary. Selective attention is necessary because some aspects of the environment are more important and interesting than others, and there is too much incoming stimulation at the retina to process everything. Discuss how eye movements are related to selective attention. Eye movement to scan a scene and focus on objects of interest is one mechanism of selective attention. Saccades are small, rapid eye movements; fixations are pauses in eye movements that indicate where a person is attending. There are approximately 3 fixations per second. Selective attention is the ability to select from the various factors or stimuli that are present and to focus on only the one that you want. Every day you are exposed to a number of environmental factors or stimuli, but your brain naturally responds by selecting a particular aspect or factor to focus on. Selective attention basically allows you to "select" what you want to pay attention to. You may need to use selective attention when attending a loud party and focusing on one person's voice, or when trying to study in a noisy room. When employing selective attention you are able to avoid distractions from both external (e.g. noise) and internal (e.g. thoughts) influences. If you are good at selective attention, you are good at ignoring distractions and are able to maintain a specified level of performance in the presence of distracting stimuli.

Smooth pursuit movements

Smooth pursuit movements are much slower tracking movements of the eyes designed to keep a moving stimulus on the fovea. Such movements are under voluntary control in the sense that the observer can choose whether or not to track a moving stimulus (Figure 20.5). (Saccades can also be voluntary, but are also made unconsciously.) Surprisingly, however, only highly trained observers can make a smooth pursuit movement in the absence of a moving target. Most people who try to move their eyes in a smooth fashion without a moving target simply make a saccade. The smooth pursuit system can be tested by placing a subject inside a rotating cylinder with vertical stripes. (In practice, the subject is more often seated in front of a screen on which a series of horizontally moving vertical bars is presented to conduct this "optokinetic test.") The eyes automatically follow a stripe until they reach the end of their excursion. There is then a quick saccade in the direction opposite to the movement, followed once again by smooth pursuit of a stripe. This alternating slow and fast movement of the eyes in response to such stimuli is called optokinetic nystagmus. Optokinetic nystagmus is a normal reflexive response of the eyes in response to large-scale movements of the visual scene and should not be confused with the pathological nystagmus that can result from certain kinds of brain injury (for example, damage to the vestibular system or the cerebellum; see Chapters 14 and 19).

Social Learning Theory

Social Learning Theory Social learning theory is a psychological perspective that states that all social behavior is learned, reinforced and modeled by the observation of others' actions and the rewards/punishments following those actions. Social learning theory was derived from the work of Albert Bandura, whose initial research analyzed the willingness of children and adults to imitate behaviour observed in others. Bandura proposed that the modelling process involved four main steps: 1. In order for an individual to learn a behaviour that another individual is performing, they must first pay attention to the features of that behaviour. 2. In order to successfully perform the learned behaviour, they must be able to remember, or retain, the steps and features of the behaviour. 3. The individual must possess the motor coordination necessary to accurately or approximately reproduce the behaviour seen. 4. The individual must have the motivation to accurately perform the previous three steps.

synapse

Space between two neurons or between a neuron and an effector. This is where neurotransmitters get released.

computed tomography

Specialized X-ray that gives good anatomical images of the brain. Has bad temporal resolution but good spatial resolution. Advantages of a CT scan: more sensitive than regular X-rays; less expensive than an MRI; good for people with claustrophobia; not affected by movement; good for viewing trauma and neurological injuries; non-invasive. Disadvantages of a CT scan: involves X-rays; bones obscure the images.

The backward inhibition effect

The backward inhibition effect occurs when test subjects are asked to carry out a set of cognitive tasks, switching from each one to the next. When carrying out three or more tasks in a row, subjects take longer to switch back to a previous, recently abandoned task than to switch to a new one. This is attributed to inhibition of the just-abandoned task.
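The effect can be illustrated with an invented reaction-time model: an ordinary switch cost for any task change, plus an extra penalty when the upcoming task matches the one performed two trials ago (the A-B-A versus C-B-A comparison). All constants below are made up for illustration.

```python
# Toy backward-inhibition (n-2 repetition) model: returning to a
# recently abandoned task is slower than switching to a fresh one.
def predicted_rt(sequence, base_ms=600, switch_cost=50, inhibition_ms=40):
    """Predicted RT for the last task in a sequence of task labels."""
    rt = base_ms
    if sequence[-1] != sequence[-2]:
        rt += switch_cost                  # ordinary switch cost
    if len(sequence) >= 3 and sequence[-1] == sequence[-3]:
        rt += inhibition_ms                # recently abandoned task
    return rt

print(predicted_rt(["A", "B", "A"]))  # 690 ms: back to inhibited task
print(predicted_rt(["C", "B", "A"]))  # 650 ms: switch to a fresh task
```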

The cocktail party effect

The cocktail party effect is the phenomenon of the brain's ability to focus one's auditory attention (an effect of selective attention in the brain) on a particular stimulus while filtering out a range of other stimuli, as when a partygoer can focus on a single conversation in a noisy room.[1][2][3] Listeners have the ability to both segregate different stimuli into different streams, and subsequently decide which streams are most pertinent to them.[4] Thus, it has been proposed that one's sensory memory subconsciously siphons through all stimuli, and when an important word or phrase with high meaning appears, it stands out to the listener.[5] This effect is what allows most people to "tune into" a single voice and "tune out" all others. It may also describe a similar phenomenon that occurs when one may immediately detect words of importance originating from unattended stimuli, for instance hearing one's name in another conversation during a cocktail party.[6][7] Auditory attention with regard to the cocktail party effect primarily occurs in the left hemisphere of the superior temporal gyrus (where the primary auditory cortex is located); a fronto-parietal network involving the inferior frontal gyrus, superior parietal sulcus, and intraparietal sulcus also accounts for the acts of attention-shifting, speech processing, and attention control.[1] Both the target stream (the more important information being attended to) and competing/interfering streams are processed in the same pathway within the left hemisphere, but fMRI scans show that target streams are treated with more attention than competing streams.[8] Furthermore, activity in the STG toward the target stream is decreased/interfered with when competing stimuli streams (that typically hold significant value) arise. The "cocktail party effect" - the ability to detect significant stimuli in multitalker situations - has also been labeled the "cocktail party problem", because our ability to selectively attend simultaneously interferes with the effectiveness of attention at a neurological level.[8] The cocktail party effect works best as a binaural effect, which requires hearing with both ears. People with only one functioning ear seem much more distracted by interfering noise than people with two typical ears.[9] The binaural aspect of the cocktail party effect is related to the localization of sound sources. The auditory system is able to localize at least two sound sources and assign the correct characteristics to these sources simultaneously. As soon as the auditory system has localized a sound source, it can extract the signals of this sound source out of a mixture of interfering sound sources.[10] Early work In the early 1950s much of the early attention research can be traced to problems faced by air traffic controllers. At that time, controllers received messages from pilots over loudspeakers in the control tower. Hearing the intermixed voices of many pilots over a single loudspeaker made the controller's task very difficult.[11] The effect was first defined and named "the cocktail party problem" by Colin Cherry in 1953.[12] Cherry conducted attention experiments in which participants listened to two different messages from a single loudspeaker at the same time and tried to separate them; this was later termed a dichotic listening task.[13] (See Broadbent section below for more details.)
His work reveals that the ability to separate sounds from background noise is affected by many variables, such as the sex of the speaker, the direction from which the sound is coming, the pitch, and the rate of speech.[12] Cherry developed the shadowing task in order to further study how people selectively attend to one message amid other voices and noises. In a shadowing task participants wear a special headset that presents a different message to each ear. The participant is asked to repeat aloud the message (called shadowing) that is heard in a specified ear (called a channel).[13] Cherry found that participants were able to detect their name from the unattended channel, the channel they were not shadowing.[14] Later research using Cherry's shadowing task was done by Neville Moray in 1959. He was able to conclude that almost none of the rejected message is able to penetrate the block set up, except subjectively "important" messages.[14] More recent work Selective attention shows up across all ages. Starting with infancy, babies begin to turn their heads toward a sound that is familiar to them, such as their parents' voices.[15] This shows that infants selectively attend to specific stimuli in their environment. Furthermore, reviews of selective attention indicate that infants favor "baby" talk over speech with an adult tone.[13][15] This preference indicates that infants can recognize physical changes in the tone of speech. However, the accuracy in noticing these physical differences, like tone, amid background noise improves over time.[15] Infants may simply ignore stimuli because something like their name, while familiar, holds no higher meaning to them at such a young age. However, research suggests that the more likely scenario is that infants don't understand that the noise being presented to them amidst distracting noise is their own name, and thus do not respond.[16] The ability to filter out unattended stimuli reaches its prime in young adulthood. In reference to the cocktail party phenomenon, older adults have a harder time than younger adults focusing in on one conversation if competing stimuli, like "subjectively" important messages, make up the background noise.[15] Some examples of messages that catch people's attention include personal names and taboo words. The ability to selectively attend to one's own name has been found in infants as young as 5 months of age and appears to be fully developed by 13 months.[17] Along with multiple experts in the field, Anne Treisman states that people are permanently primed to detect personally significant words, like names, and theorizes that they may require less perceptual information than other words to trigger identification.[18] Another stimulus that reaches some level of semantic processing while in the unattended channel is taboo words.[19] These words often contain sexually explicit material that causes an alerting response in people and leads to decreased performance in shadowing tasks.[20] Taboo words do not affect children in selective attention until they develop a strong vocabulary with an understanding of language. Selective attention begins to waver as we get older. Older adults have longer latency periods in discriminating between conversation streams.
This is typically attributed to the fact that general cognitive ability begins to decay with old age (as exemplified with memory, visual perception, higher order functioning, etc.).[1][4] Even more recently, modern neuroscience techniques are being applied to study the cocktail party problem. Some notable examples of researchers doing such work include Edward Chang, Nima Mesgarani, and Charles Schroeder using electrocorticography; Jonathan Simon, Mounya Elhilali, Adrian KC Lee, Shihab Shamma, Barbara Shinn-Cunningham and Jyrki Ahveninen using magnetoencephalography; Jyrki Ahveninen, Edmund Lalor, and Barbara Shinn-Cunningham using electroencephalography; and Jyrki Ahveninen and Lee M. Miller using functional magnetic resonance imaging. Models of attention Not all the information presented to us can be processed. In theory, the selection of what to pay attention to can be random or nonrandom.[21] For example, when driving, drivers are able to focus on the traffic lights rather than on other stimuli present in the scene. In such cases it is mandatory to select which portion of presented stimuli is important. A basic question in psychology is when this selection occurs.[13] This issue has developed into the early versus late selection controversy. The basis for this controversy can be found in the Cherry dichotic listening experiments. Participants were able to notice physical changes, like pitch or change in gender of the speaker, and stimuli, like their own name, in the unattended channel. This brought about the question of whether the meaning, semantics, of the unattended message was processed before selection.[13] In an early selection attention model very little information is processed before selection occurs. In late selection attention models more information, like semantics, is processed before selection occurs.[21] Broadbent Some of the earliest work in exploring mechanisms of early selective attention was performed by Donald Broadbent, who proposed a theory that came to be known as the filter model.[22] This model was established using the dichotic listening task. His research showed that most participants were accurate in recalling information that they actively attended to, but were far less accurate in recalling information that they had not attended to. This led Broadbent to the conclusion that there must be a "filter" mechanism in the brain that could block out information that was not selectively attended to. The filter model was hypothesized to work in the following way: as information enters the brain through sensory organs (in this case, the ears) it is stored in sensory memory, a buffer memory system that hosts an incoming stream of information long enough for us to pay attention to it.[13] Before information is processed further, the filter mechanism allows only attended information to pass through. The selected attention is then passed into working memory, the set of mechanisms that underlies short-term memory and communicates with long-term memory.[13] In this model, auditory information can be selectively attended to on the basis of its physical characteristics, such as location and volume.[22][23][24] Others suggest that information can be attended to on the basis of Gestalt features, including continuity and closure.[25] For Broadbent, this explained the mechanism by which people can choose to attend to only one source of information at a time while excluding others. 
However, Broadbent's model failed to account for the observation that words of semantic importance, for example the individual's own name, can be instantly attended to despite having been in an unattended channel. Shortly after Broadbent's experiments, Oxford undergraduates Gray and Wedderburn repeated his dichotic listening tasks, altered with monosyllabic words that could form meaningful phrases, except that the words were divided across ears.[26] For example, the words, "Dear, one, Jane," were sometimes presented in sequence to the right ear, while the words, "three, Aunt, six," were presented in a simultaneous, competing sequence to the left ear. Participants were more likely to remember, "Dear Aunt Jane," than to remember the numbers; they were also more likely to remember the words in the phrase order than to remember the numbers in the order they were presented. This finding goes against Broadbent's theory of complete filtration because the filter mechanism would not have time to switch between channels. This suggests that meaning may be processed first. Treisman In a later addition to this existing theory of selective attention, Anne Treisman developed the attenuation model.[27] In this model, information, when processed through a filter mechanism, is not completely blocked out as Broadbent might suggest. Instead, the information is weakened (attenuated), allowing it to pass through all stages of processing at an unconscious level. Treisman also suggested a threshold mechanism whereby some words, on the basis of semantic importance, may grab one's attention from the unattended stream. One's own name, according to Treisman, has a low threshold value (i.e. it has a high level of meaning) and thus is recognized more easily. The same principle applies to words like fire, directing our attention to situations that may immediately require it. The only way this can happen, Treisman argued, is if information was being processed continuously in the unattended stream. Deutsch & Deutsch Diana Deutsch, best known for her work in music perception and auditory illusions, has also made important contributions to models of attention. In order to explain in more detail how words can be attended to on the basis of semantic importance, Deutsch & Deutsch[28] and Norman[29] proposed a model of attention which includes a second selection mechanism based on meaning. In what came to be known as the Deutsch-Norman model, information in the unattended stream is not processed all the way into working memory, as Treisman's model would imply. Instead, information on the unattended stream is passed through a secondary filter after pattern recognition. If the unattended information is recognized and deemed unimportant by the secondary filter, it is prevented from entering working memory. In this way, only immediately important information from the unattended channel can come to awareness. Kahneman Daniel Kahneman also proposed a model of attention, but it differs from previous models in that he describes attention not in terms of selection, but in terms of capacity. For Kahneman, attention is a resource to be distributed among various stimuli,[30] a proposition which has received some support.[7][31][32] This model describes not when attention is focused, but how it is focused. According to Kahneman, attention is generally determined by arousal; a general state of physiological activity. 
The Yerkes-Dodson law predicts that arousal will be optimal at moderate levels - performance will be poor when one is over- or under-aroused. Of particular relevance, Narayan et al. discovered a sharp decline in the ability to discriminate between auditory stimuli when background noises were too numerous and complex - this is evidence of the negative effect of overarousal on attention.[31] Thus, arousal determines our available capacity for attention. Then, an allocation policy acts to distribute our available attention among a variety of possible activities. Those deemed most important by the allocation policy will have the most attention given to them. The allocation policy is affected by enduring dispositions (automatic influences on attention) and momentary intentions (a conscious decision to attend to something). Momentary intentions requiring a focused direction of attention rely on substantially more attention resources than enduring dispositions.[33] Additionally, there is an ongoing evaluation of the particular demands of certain activities on attention capacity.[30] That is to say, activities that are particularly taxing on attention resources will lower attention capacity and will influence the allocation policy - in this case, if an activity is too draining on capacity, the allocation policy will likely cease directing resources to it and instead focus on less taxing tasks. Kahneman's model explains the cocktail party phenomenon in that momentary intentions might allow one to expressly focus on a particular auditory stimulus, but that enduring dispositions (which can include new events, and perhaps words of particular semantic importance) can capture our attention. It is important to note that Kahneman's model doesn't necessarily contradict selection models, and thus can be used to supplement them. Visual correlates Some research has demonstrated that the cocktail party effect may not be simply an auditory phenomenon, and that relevant effects can be obtained when testing visual information as well. For example, Shapiro et al. were able to demonstrate an "own name effect" with visual tasks, where subjects were able to easily recognize their own names when presented as unattended stimuli.[34] They adopted a position in line with late selection models of attention such as the Treisman or Deutsch-Norman models, suggesting that early selection would not account for such a phenomenon. The mechanisms by which this effect might occur were left unexplained.
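The contrast between Broadbent's early filter and Treisman's attenuation model, described above, can be caricatured in a few lines of Python. Everything here is a toy: the word lists, the listener's name, the attenuation factor, and the thresholds are invented; only the qualitative behaviour (the filter blocks the unattended channel outright, while the attenuator lets low-threshold words such as one's own name through) follows the text.

```python
# Toy contrast of two selective-attention models from the entry above.
THRESHOLDS = {"fire": 0.2, "anna": 0.2}   # personally significant words
DEFAULT_THRESHOLD = 0.8                   # ordinary words

def broadbent(attended, unattended):
    """Early filter: the unattended channel is blocked entirely."""
    return list(attended)

def treisman(attended, unattended, attenuation=0.3):
    """Attenuator: unattended words are weakened, not blocked; words
    with low thresholds can still break through."""
    heard = list(attended)                # attended words, full strength
    for word in unattended:
        threshold = THRESHOLDS.get(word, DEFAULT_THRESHOLD)
        if attenuation >= threshold:      # weak signal still exceeds it
            heard.append(word)
    return heard

left = ["dear", "aunt", "jane"]           # shadowed channel
right = ["three", "anna", "six"]          # unattended channel
print(broadbent(left, right))  # ['dear', 'aunt', 'jane']
print(treisman(left, right))   # ['dear', 'aunt', 'jane', 'anna']
```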

The law of effect

The law of effect is a concept coined by Edward Thorndike, stating that if a specific behaviour is followed by a positive outcome, the behaviour is more likely to recur.

The law of exercise

The law of exercise is a concept coined by Edward Thorndike, stating that the more frequently a stimulus is connected with a response, the stronger the link between the two will be.

Fergus I.M. Craik and Robert S. Lockhart in 1972,

The levels of processing framework, proposed by Fergus I.M. Craik and Robert S. Lockhart in 1972, refers to three levels of encoding incoming information: structural or visual encoding (concerned with what stimuli look like), phonemic encoding (concerned with how they sound) and semantic encoding (concerned with meaning). Shallow processing (visual and phonemic processing) leads to a fragile memory trace that is susceptible to rapid decay, while deep processing (semantic processing) results in a more durable memory trace.

The neighbourhood activation model

The neighbourhood activation model is a computational theory of spoken word recognition proposed by Luce & Pisoni, which states that auditory stimulus inputs to the brain directly activate similar sounding acoustic-phonetic patterns in memory.
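A loose sketch of the neighbourhood-activation idea: stored words are activated in proportion to their similarity to the auditory input, so similar-sounding "neighbours" compete with the target. The toy lexicon and the use of difflib string similarity as a stand-in for acoustic-phonetic similarity are assumptions of this sketch, not part of Luce and Pisoni's model.

```python
# Toy neighbourhood activation: an input activates every stored word
# in proportion to its (here, orthographic) similarity to the input.
from difflib import SequenceMatcher

lexicon = ["cat", "cap", "cut", "bat", "dog", "catalogue"]

def activations(heard):
    """Activation of each stored word ~ similarity to the input."""
    return {w: round(SequenceMatcher(None, heard, w).ratio(), 2)
            for w in lexicon}

for word, act in sorted(activations("cat").items(),
                        key=lambda kv: -kv[1]):
    print(f"{word:10s} {act}")   # 'cat' highest, then its neighbours
```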

transcranial magnetic stimulation

Transcranial magnetic stimulation is a type of non-invasive brain stimulation, whereby a brief electrical current is generated within the cerebral cortex. This electrical pulse is generated by sending a current through a coil held at a predetermined position on the scalp, and the resulting magnetic field then generates an electrical pulse within the brain, at a depth of up to several centimetres below the skull (1). By targeting and disrupting neural activity at specific locations of the cerebral cortex, TMS has been used to study aspects of cognitive and perceptual activity related to localization of function in cortical areas of the brain. The technique is also used to treat some patients suffering from medication-resistant depression, as well as to establish cortical motor and language maps prior to surgical interventions (2). TMS allows for transient, safe disruption of local neural activity: a single pulse activates a brain area, while repetitive pulses can create temporary 'lesions', which can determine whether an area is necessary for a given activity. Advantages of TMS: non-invasive; creates only temporary lesions; useful for treating patients with depression, schizophrenia and autism (conditions associated with a lack of serotonin; pulses are sent to activate serotonin activity). Limitations of TMS: side effects and long-term effects are not fully known; only one area can be examined at a time.

Vergence movements

Vergence movements align the fovea of each eye with targets located at different distances from the observer. Unlike other types of eye movements in which the two eyes move in the same direction (conjugate eye movements), vergence movements are disconjugate (or disjunctive); they involve either a convergence or divergence of the lines of sight of each eye to see an object that is nearer or farther away. Convergence is one of the three reflexive visual responses elicited by interest in a near object. The other components of the so-called near reflex triad are accommodation of the lens, which brings the object into focus, and pupillary constriction, which increases the depth of field and sharpens the image on the retina (see Chapter 11).

Vestibulo-ocular movements

Vestibulo-ocular movements stabilize the eyes relative to the external world, thus compensating for head movements. These reflex responses prevent visual images from "slipping" on the surface of the retina as head position varies. The action of vestibulo-ocular movements can be appreciated by fixating an object and moving the head from side to side; the eyes automatically compensate for the head movement by moving the same distance but in the opposite direction, thus keeping the image of the object at more or less the same place on the retina. The vestibular system detects brief, transient changes in head position and produces rapid corrective eye movements (see Chapter 14). Sensory information from the semicircular canals directs the eyes to move in a direction opposite to the head movement. While the vestibular system operates effectively to counteract rapid movements of the head, it is relatively insensitive to slow movements or to persistent rotation of the head. For example, if the vestibulo-ocular reflex is tested with continuous rotation and without visual cues about the movement of the image (i.e., with eyes closed or in the dark), the compensatory eye movements cease after only about 30 seconds of rotation. However, if the same test is performed with visual cues, eye movements persist. The compensatory eye movements in this case are due to the activation of the smooth pursuit system, which relies not on vestibular information but on visual cues indicating motion of the visual field.

Electroencephalography (EEG) and Event-Related Potentials (ERP)

Electroencephalography (EEG) and event-related potentials (ERP) record patterns of electrical activity. With EEG you can tell whether a person has coherent patterns of brain activity, including seizures and sleep stages; an ERP is the brain's electrical response time-locked to a presented stimulus. Advantages of EEG and ERP: non-invasive; can identify pathological states of the brain via their electrical signatures; can view a demyelinated brain's activity; data can be analysed by averaging over intervals, and averaged waveforms are a powerful technique for bringing out the response caused by a stimulus; used before neurosurgery; useful with children; produces immediate feedback for analysis. Disadvantages of EEG and ERP: not much can be determined about the anatomy of the brain; it is not 100% certain where in the brain the signals originate in response to a stimulus; cannot measure subcortical structures.

the information-processing approach

The information processing approach is based on a number of assumptions, including: (1) information made available by the environment is processed by a series of processing systems (e.g. attention, perception, short-term memory); (2) these processing systems transform or alter the information in systematic ways; (3) the aim of research is to specify the processes and structures that underlie cognitive performance; (4) information processing in humans resembles that in computers. Hence the information processing approach characterizes thinking as the environment providing input of data, which is then transformed by our senses. The information can be stored, retrieved and transformed using "mental programs", with the results being behavioral responses. Cognitive psychology has influenced and integrated with many other approaches and areas of study to produce, for example, social learning theory, cognitive neuropsychology and artificial intelligence (AI). Information Processing & Attention When we are selectively attending to one activity, we tend to ignore other stimulation, although our attention can be distracted by something else, like the telephone ringing or someone using our name. Psychologists are interested in what makes us attend to one thing rather than another (selective attention); why we sometimes switch our attention to something that was previously unattended (e.g. Cocktail Party Syndrome), and how many things we can attend to at the same time (attentional capacity). One way of conceptualizing attention is to think of humans as information processors who can only process a limited amount of information at a time without becoming overloaded. Broadbent and others in the 1950s adopted a model of the brain as a limited capacity information processing system, through which external input is transmitted. The Information Processing System: Information processing models consist of a series of stages, or boxes, which represent stages of processing. Arrows indicate the flow of information from one stage to the next. * Input processes are concerned with the analysis of the stimuli. * Storage processes cover everything that happens to stimuli internally in the brain and can include coding and manipulation of the stimuli. * Output processes are responsible for preparing an appropriate response to a stimulus. Critical Evaluation A number of models of attention within the Information Processing framework have been proposed, including: Broadbent's Filter Model (1958), Treisman's Attenuation Model (1964) and Deutsch and Deutsch's Late Selection Model (1963). However, there are a number of evaluative points to bear in mind when studying these models, and the information processing approach in general. These include: 1. The information processing models assume serial processing of stimulus inputs. Serial processing effectively means one process has to be completed before the next starts. Parallel processing assumes some or all processes involved in a cognitive task(s) occur at the same time. There is evidence from dual-task experiments that parallel processing is possible. It is difficult to determine whether a particular task is processed in a serial or parallel fashion, as it probably depends (a) on the processes required to solve a task, and (b) the amount of practice on a task. Parallel processing is probably more frequent when someone is highly skilled; for example a skilled typist thinks several letters ahead, while a novice focuses on just one letter at a time. 2. 
The analogy between human cognition and computer functioning adopted by the information processing approach is limited. Computers can be regarded as information processing systems insofar as they: (i) combine information presented with stored information to provide solutions to a variety of problems, and (ii) most computers have a central processor of limited capacity, and it is usually assumed that capacity limitations affect the human attentional system. BUT - (i) the human brain has the capacity for extensive parallel processing, while computers often rely on serial processing; (ii) humans are influenced in their cognitions by a number of conflicting emotional and motivational factors. 3. The evidence for the theories/models of attention which come under the information processing approach is largely based on experiments under controlled, scientific conditions. Most laboratory studies are artificial and could be said to lack ecological validity. In everyday life, cognitive processes are often linked to a goal (e.g. you pay attention in class because you want to pass the examination), whereas in the laboratory the experiments are carried out in isolation from other cognitive and motivational factors. Although these laboratory experiments are easy to interpret, the data may not be applicable to the real world outside the laboratory. More recent, ecologically valid approaches to cognition have been proposed (e.g. the Perceptual Cycle, Neisser, 1976). Attention has been studied largely in isolation from other cognitive processes, although clearly it operates as an interdependent system with the related cognitive processes of perception and memory. The more successful we become at examining part of the cognitive system in isolation, the less our data are likely to tell us about cognition in everyday life. 4. The models proposed by Broadbent and Treisman are 'bottom-up' or 'stimulus driven' models of attention. Although it is agreed that stimulus-driven information in cognition is important, what the individual brings to the task in terms of expectations and past experiences is also important. These influences are known as 'top-down' or 'conceptually-driven' processes. For example, read the text in the triangle below (figure not shown). Expectation (top-down processing) often over-rides information actually available in the stimulus (bottom-up) which we are, supposedly, attending to. How did you read the text in the triangle?
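The "series of stages, or boxes" scheme described above can be mocked up directly: each box is a function, and the arrows are plain function composition, which makes the serial-processing assumption criticised in point 1 explicit. The stages below are invented placeholders, not a real cognitive model.

```python
# Toy stages-and-arrows pipeline: input analysis -> storage/recoding ->
# response preparation, run strictly one stage after another.
def input_stage(raw):
    """Analyse the stimulus (here: just tokenise it)."""
    return raw.lower().split()

def storage_stage(tokens):
    """Code/manipulate internally (here: keep only content words)."""
    stopwords = {"the", "a", "of"}
    return [t for t in tokens if t not in stopwords]

def output_stage(coded):
    """Prepare a response to the coded stimulus."""
    return f"responded to {len(coded)} coded items: {coded}"

def process(raw):
    # Arrows between boxes = function composition: each stage must
    # finish before the next starts (the serial-processing assumption).
    return output_stage(storage_stage(input_stage(raw)))

print(process("The cat sat on a mat"))
```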

selective perception

We attend to meaningful stimuli and filter out irrelevant or extraneous stimuli; ex. if you're reading a book that you can't seem to put down, you'll block out other noises and people around you and focus on the story in the book.

Aversion therapy

Aversion therapy is a behavioural modification technique which uses the principles of classical conditioning. An unpleasant stimulus is paired with an unwanted behaviour (such as nail-biting or smoking) in order to create an aversion to it. Examples of the types of unpleasant stimuli used are electric shocks and nausea-inducing drugs.

Define change detection

Define change detection, and describe research that supports this concept. Change detection is the ability to notice differences between two successively viewed scenes or displays. Change blindness experiments show that people can miss large changes in scenes if those changes do not markedly alter the meaning of the scene.

Treisman's feature integration theory.

Feature integration theory (FIT)

A popular explanation for the different reaction times of feature and conjunction searches is the feature integration theory (FIT), introduced by Treisman and Gelade in 1980. This theory proposes that certain visual features are registered early, automatically, and are coded rapidly in parallel across the visual field using preattentive processes.[19] Experiments show that these features include luminance, colour, orientation, motion direction, and velocity, as well as some simple aspects of form.[20] For example, a red X can be quickly found among any number of black Xs and Os because the red X has the discriminative feature of colour and will "pop out." In contrast, this theory also suggests that in order to integrate two or more visual features belonging to the same object, a later process involving integration of information from different brain areas is needed, coded serially using focal attention. For example, when locating an orange square among blue squares and orange triangles, neither the colour feature "orange" nor the shape feature "square" is sufficient to locate the search target. Instead, one must integrate information about both colour and shape to locate the target.

Evidence that attention, and thus later visual processing, is needed to integrate two or more features of the same object is shown by the occurrence of illusory conjunctions, which happen when features do not combine correctly. For example, if a display of a green X and a red O is flashed on a screen so briefly that the later visual process of a serial search with focal attention cannot occur, the observer may report seeing a red X and a green O.

The FIT is a dichotomy because of the distinction between its two stages: the preattentive and attentive stages.[21] Preattentive processes are those performed in the first stage of the FIT model, in which the simplest features of the object are analyzed, such as color, size, and arrangement. The second, attentive stage of the model incorporates cross-dimensional processing,[22] in which the actual identification of an object is done and information about the target object is put together.

This theory has not always been what it is today; there have been disagreements and problems with its proposals that have allowed the theory to be amended and altered over time, and this criticism and revision have allowed it to become more accurate in its description of visual search.[22] There have been disagreements over whether or not there is a clear distinction between feature detection and other searches that use a master map accounting for multiple dimensions in order to search for an object. Some psychologists support the idea that feature integration is completely separate from this type of master map search, whereas many others have decided that feature integration incorporates the use of a master map in order to locate an object in multiple dimensions.[21] The FIT also explains that there is a distinction between the brain's processes that are used in a parallel versus a focal attention task.
Chan and Hayward[21] have conducted multiple experiments supporting this idea by demonstrating the role of dimensions in visual search. While exploring whether or not focal attention can reduce the costs caused by dimension-switching in visual search, they explained that the results collected supported the mechanisms of the feature integration theory in comparison to other search-based approaches. They discovered that single dimensions allow for a much more efficient search regardless of the size of the area being searched, but once more dimensions are added it is much more difficult to search efficiently, and the bigger the area being searched, the longer it takes to find the target.[21]

Discuss Treisman's feature integration theory: the basic principles, methodology, results, and the role of illusory conjunctions. The theory is that there is a preattentive stage of basic feature processing followed by a second, attention-demanding stage in which one can achieve the correct binding of features to objects. Illusory conjunctions (wrong combinations of features) occur when we don't have enough time to complete the task of binding. In this case, we do the best we can with the information we have.

Describe Treisman's feature integration theory. This theory of visual attention states that a limited set of basic features can be processed in parallel preattentively, but that other properties, including the correct binding of features to objects, require attention.

What are the two stages of feature integration theory? 1) The preattentive stage, which refers to the processing of stimuli that occurs before selective attention is deployed to any particular stimulus. 2) The attentive stage, which refers to processing that requires the deployment of attention to a particular stimulus or location.

forgetting

Loss of memory, or forgetting, is defined as an inability to retrieve stored information due to its poor encoding, storage, or retrieval. While the process of forgetting is beneficial for maintaining the plasticity of the brain, it is detrimental when useful data or information is lost. There are four main explanations for the loss of stored data:
Encoding failure - short-term memories that are not properly encoded are never converted into long-term memory, and hence are inevitably lost.
Storage decay - poor quality and strength of a stored memory cause it to gradually decay.
Retrieval failure - inability to successfully retrieve a stored long-term memory causes loss of access to that memory.
Interference theory - acquisition of new data interferes with similar data that is already stored.

Prefrontal Region

Prefrontal Region Associated with extracting meaning from pictures and sentences; left prefrontal - verbal material; right prefrontal - visual material

Scripts

Scripts event schemas that involve stereotypic sequences of actions

abstraction theories

abstraction theories propose that we have actually abstracted general properties from the instances we have studied

global superiority effect

we tend to process the whole of an object before we process its features (David Navon, 1977); e.g. when asked to name a local letter, the identity of the global letter affects responding, but when asked to name the global letter, the local letters make no difference

axon

The long fiber that carries electrical signals (action potentials, or nerve impulses) away from the neuron's cell body; the ends of the axon contain vesicles filled with neurotransmitters (NTs).

semantic network

A semantic network, or frame network, is a network that represents semantic relations between concepts. This is often used as a form of knowledge representation. It is a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent semantic relations between concepts.[1] Typical standardized semantic networks are expressed as semantic triples. A semantic network is used when one has knowledge that is best understood as a set of concepts that are related to one another. Most semantic networks are cognitively based. They also consist of arcs and nodes which can be organized into a taxonomic hierarchy. Semantic networks contributed ideas of spreading activation, inheritance, and nodes as proto-objects.
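Since a semantic network is just a labeled, directed graph, the idea is easy to sketch in code. The concepts, relations, and the inheritance rule below are illustrative only, not drawn from any particular published network:

# A minimal semantic network stored as (subject, relation, object) triples.
triples = [
    ("canary", "is-a", "bird"),
    ("bird", "is-a", "animal"),
    ("bird", "has", "wings"),
    ("canary", "can", "sing"),
]

def inherits(concept, relation, value):
    # A property holds if stated directly, or if inherited via is-a links.
    if (concept, relation, value) in triples:
        return True
    parents = [o for (s, r, o) in triples if s == concept and r == "is-a"]
    return any(inherits(p, relation, value) for p in parents)

print(inherits("canary", "has", "wings"))  # True, inherited from "bird"

This mirrors the inheritance idea mentioned above: "canary has wings" is never stored directly, but follows from the is-a hierarchy.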

allostasis

Allostasis is a term referring to an organism's physiological and behavioural reactions to stressors. The concept of allostasis was coined by Sterling and Eyer in 1988, who proposed that the notion of homeostasis (error-correction by feedback) is insufficient in terms of serving as the primary regulatory mechanism in biological systems. In the allostasis model, a predictive element is added, in which learned aspects of a situation are combined with sense data, ensuring a preemptive, and thereby more efficient response to stressors.

the binding problem

Binding problem: the challenge of tying different attributes of visual stimuli (e.g., color, orientation, motion), which are handled by different brain circuits, to the appropriate object so that we perceive a unified object (e.g., red, vertical, moving right). It is vital to perception, because if one does not have the time to look carefully at a stimulus, he or she is likely to make wrong combinations of the features from two or more different objects. What is the binding problem? The challenge of tying different attributes of visual stimuli, which are handled by different brain circuits, to the appropriate object so that we perceive a unified object.

Controlled Processing

Controlled Processing requires concentration of performance and thought; ex. first learning to drive (gas vs. brake, mirrors) takes more effort and we are hyper-aware.
- We perform more slowly, with more effort
- Concentrated behavior
- Single-task oriented
- We shift into controlled processing when: the task is difficult or novel (new); motivation is high; individual difference: high need for cognition

Degree of category membership

Degree of category membership different instances are judged to be members of a category to different degrees, with the more typical members of a category having an advantage in processing

Dichotic listening

Dichotic listening is a behavioural technique for studying brain asymmetry in auditory processing. In a dichotic listening experiment, the subject is presented with different sounds to the right and the left ear simultaneously. This means that the subject receives more auditory stimulation than she is able to analyze consciously. The interesting question, then, is what part of the input will be selected for conscious analysis. Dichotic listening is a psychological test commonly used to investigate selective attention within the auditory system and is a subtopic of cognitive psychology and neuroscience.

Dichotic Fused Words Test (DFWT)

The Dichotic Fused Words Test (DFWT) is a modified version of the basic dichotic listening test. It was originally explored by Johnson et al. (1977),[8] but in the early 80s Wexler and Halwes (1983)[9] modified this original test to obtain more accurate data pertaining to hemispheric specialization of language function. In the DFWT, each participant listens to pairs of monosyllabic rhyming consonant-vowel-consonant (CVC) words. Each word varies in the initial consonant. The significant difference in this test is that "the stimuli are constructed and aligned in such a way that partial interaural fusion occurs: subjects generally experience and report only one stimulus per trial."[10] According to Zatorre (1989), some major advantages of this method include "minimizing attentional factors, since the percept is unitary and localized to the midline" and "stimulus dominance effects may be explicitly calculated, and their influence on ear asymmetries assessed and eliminated."[10] Wexler and Halwes' study obtained a high test-retest reliability (r=0.85),[9] which indicates that the data collected with the test are consistent.

Testing with Emotional Factors

An emotional version of the dichotic listening task has been developed. In this version, individuals listen to the same word in each ear but hear it in either a surprised, happy, sad, angry, or neutral tone. Participants are then asked to press a button indicating what tone they heard. Usually dichotic listening tests show a right-ear advantage for speech sounds. A right-ear/left-hemisphere advantage is expected because of evidence from Broca's area and Wernicke's area, which are both located in the left hemisphere. In contrast, the left ear (and therefore the right hemisphere) is often better at processing nonlinguistic material.[11] The data from the emotional dichotic listening task are consistent with the other studies, because participants tend to have more correct responses to their left ear than to the right.[12] It is important to note that the emotional dichotic listening task is seemingly harder for participants than the phonemic dichotic listening task, meaning that more incorrect responses were submitted by individuals.

Manipulation of Voice Onset Time (VOT)

The manipulation of voice onset time (VOT) during dichotic listening tests has given many insights regarding brain function.[13] To date, the most common design is the utilisation of four VOT conditions: short-long pairs (SL), where a consonant-vowel (CV) syllable with a short VOT is presented to the left ear and a CV syllable with a long VOT is presented to the right ear, as well as long-short (LS), short-short (SS) and long-long (LL) pairs.
In 2006, Rimol, Eichele, and Hugdahl[14] first reported that in healthy adults SL pairs elicit the largest REA while, in fact, LS pairs elicit a significant left-ear advantage (LEA). A study of children 5-8 years old has shown a developmental trajectory whereby long VOTs gradually start to dominate over short VOTs when LS pairs are presented under dichotic conditions.[15] Converging evidence from studies of attentional modulation of the VOT effect shows that around age 9 children lack the adult-like cognitive flexibility required to exert top-down control over stimulus-driven bottom-up processes.[16][17] Arciuli et al. (2010) further demonstrated that this kind of cognitive flexibility is a predictor of proficiency with complex tasks such as reading.

Neuroscience

Dichotic listening tests can also be used as a lateralized speech assessment task. Neuropsychologists have used this test to explore the role of singular neuroanatomical structures in speech perception and language asymmetry. For example, Hugdahl et al. (2003) investigated dichotic listening performance and frontal lobe function[18] in nonaphasic patients with left or right frontal lobe lesions compared to healthy controls. In the study, all groups were exposed to 36 dichotic trials with pairs of CV syllables, and each patient was asked to state which syllable he or she heard best. As expected, the right-lesioned patients showed a right-ear advantage like the healthy control group, but the left-hemisphere-lesioned patients displayed impairment when compared to both the right-lesioned patients and the control group. From this study, researchers concluded that "dichotic listening taps into a neuronal circuitry which also involves the frontal lobes, and that this may be a critical aspect of speech perception."[18] Similarly, Westerhausen and Hugdahl (2008)[19] analyzed the role of the corpus callosum in dichotic listening and speech perception. After reviewing many studies, it was concluded that "...dichotic listening should be considered a test of functional inter-hemispheric interaction and connectivity, besides being a test of lateralized temporal lobe language function" and that "the corpus callosum is critically involved in the top-down attentional control of dichotic listening performance, thus having a critical role in auditory laterality."[19]

Language Processing

Dichotic listening can also be used to test the hemispheric asymmetry of language processing. In the early 60s, Doreen Kimura reported that dichotic verbal stimuli (specifically spoken numerals) presented to a participant produced a right-ear advantage (REA).[20] She attributed the right-ear advantage "to the localization of speech and language processing in the so-called dominant left hemisphere of the cerebral cortex."[1]:115 According to her study, this phenomenon was related to the structure of the auditory nerves and the left-sided dominance for language processing.[21] It is important to note that the REA doesn't apply to non-speech sounds. In "Hemispheric Specialization for Speech Perception," Studdert-Kennedy and Shankweiler (1970)[2] examine dichotic listening of CVC syllable pairs. The six stop consonants (b, d, g, p, t, k) are paired with the six vowels, and variations in the initial and final consonants are analyzed. The REA is strongest when the sounds of the initial and final consonants differ, and it is weakest when solely the vowel is changed.
Asbjornsen and Bryden (1996) state that "many researchers have chosen to use CV syllable pairs, usually consisting of the six stop consonants paired with the vowel \a\. Over the years, a large amount of data has been generated using such material."[22]

Selective Attention

In selective attention experiments, the participants may be asked to repeat aloud the content of the message they are listening to. This task is known as shadowing. As Colin Cherry (1953)[23] found, people do not recall the shadowed message well, suggesting that most of the processing necessary to shadow the attended message occurs in working memory and is not preserved in the long-term store. Performance on the unattended message is worse. Participants are generally able to report almost nothing about the content of the unattended message. In fact, a change from English to German in the unattended channel frequently goes unnoticed. However, participants are able to report that the unattended message is speech rather than non-verbal content. In addition, if the content of the unattended message contains certain information, such as the listener's name, then the unattended message is more likely to be noticed and remembered.[24] A demonstration of this was done by Conway, Cowan, and Bunting (2001), in which subjects shadowed words in one ear while ignoring words in the other ear. At some point, the subject's name was spoken in the ignored ear, and the question was whether the subject would report hearing their name. Subjects with a high working memory (WM) span were more capable of blocking out the distracting information.[25] Also, if the message contains sexual words, people usually notice them immediately.[26] This suggests that the unattended information is also undergoing analysis and that keywords can divert attention to it.

Gender Differences

Some data gathered from dichotic listening experiments suggest that there is possibly a small-population sex difference in perceptual and auditory asymmetries and language laterality. According to Voyer (2011),[27] "Dichotic listening tasks produced homogenous effect sizes regardless of task type (verbal, non-verbal), reflecting a significant sex difference in the magnitude of laterality effects, with men obtaining larger laterality effects than women."[27]:245-246 However, the authors discuss numerous limiting factors ranging from publication bias to small effect size.
Furthermore, as discussed in "Attention, reliability, and validity of perceptual asymmetries in the fused dichotic words test,"[28] women reported more "intrusions," or words presented to the uncued ear, than men when presented with exogenous cues in the fused dichotic words task, which suggests two possibilities: 1) women experience more difficulty paying attention to the cued word than men, and/or 2) regardless of the cue, women spread their attention evenly, as opposed to men, who may focus more intently on exogenous cues.[27]

Effect of Schizophrenia on Dichotic Listening

A study involving the dichotic listening test, with emphasis on subtypes of schizophrenia (particularly paranoid and undifferentiated), demonstrated that paranoid schizophrenics have the largest left-hemisphere advantage, with undifferentiated schizophrenics (where psychotic symptoms are present but the criteria for paranoid, disorganized, or catatonic types have not been met) having the smallest.[29] The application of the dichotic listening test helped to further the view that preserved left-hemisphere processing is a product of paranoid schizophrenia and, in contrast, that the left hemisphere's lack of activity is a symptom of undifferentiated schizophrenia. In 1994, M.F. Green and colleagues tried to relate "the functional integration of the left hemisphere in hallucinating and nonhallucinating psychotic patients" using a dichotic listening study. The study showed that auditory hallucinations are connected to a malfunction in the left hemisphere of the brain.[30]

template matching (bottom-up theory)

Template matching (figure: the ambiguous "THE CAT" display, in which the middle letter of each word is the same shape; image not reproduced here.) What do you read in that figure? One way for people to recognize objects in their environment would be for them to compare their representations of those objects with templates stored in memory. For example, if I can achieve a match between the large red object I see in the street and my stored representation of a London bus, then I recognize a London bus. However, one difficulty for this theory is illustrated by the "THE CAT" figure: we have no problem differentiating the middle letters in each word (H and A), even though they are identical. A second problem is that we continue to recognize most objects regardless of the perspective we see them from (e.g. from the front, side, back, bottom, top, etc.). This would suggest we have a nearly infinite store of templates, which hardly seems credible.
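The comparison process the theory assumes (score the input against every stored template and pick the best match) can be sketched in a few lines. The 3x3 letter grids are invented for illustration:

# Template matching sketch: recognize the stored template with the
# highest cell-by-cell match score against the input pattern.
templates = {
    "T": ["###", ".#.", ".#."],
    "L": ["#..", "#..", "###"],
}

def match_score(image, template):
    # Count matching cells between the input image and a template.
    return sum(i == t for ri, rt in zip(image, template) for i, t in zip(ri, rt))

def recognize(image):
    return max(templates, key=lambda name: match_score(image, templates[name]))

print(recognize(["###", ".#.", ".#."]))  # "T"
# The THE CAT problem: an ambiguous input (or a familiar object seen from a
# new viewpoint) matches no stored template well, the weakness noted above.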

secondary emotions

The first secondary emotion is "cheerfulness". This comprises a myriad of tertiary emotions, such as: amusement, ecstasy, bliss, euphoria, elation, jubilation, and delight. Another secondary emotion is "nervousness". The tertiary emotions relating to this sub-category are: anxiety, apprehension, distress, dread, tenseness, uneasiness, and worry. A further secondary emotion is "horror". The tertiary emotions in this sub-category are: alarm, fright, horror, hysteria, mortification, panic, shock, and terror.

The functional equivalence hypothesis

The functional equivalence hypothesis states that there is a perceptual overlap between emotion expressions and certain trait markers, which then influences emotion communication. Specifically, Hess and colleagues propose that some aspects of facial expressive behavior and morphological cues to dominance and affiliation are equivalent in their effects on emotional attributions. The hypothesis has been tested with regard to gender-related differences in morphology. Specifically, men's faces are generally perceived as more dominant, whereas women's faces are perceived as more affiliative. Mediation analyses confirmed that beliefs about the emotionality of men and women are mediated by these perceptions of dominance and affiliation (Hess et al., 2005). In turn, judgments of emotional facial expressions by men and women are consistent with these perceptions (Hess et al., 1997). A recent study further suggests that social-role-based stereotypes and facial morphology interact to create beliefs about a person's likely emotional reactions (Hess et al., in press). This line of research has also shown that facial expressions of anger and happiness entrain perceptions of dominance and affiliation respectively, and perceptually overlap with the morphological markers for dominance and affiliation (Hess et al., 2009a). Specifically, in a double oddball design, an increase in reaction time is found when, from a perceptual perspective, the two types of faces belong to the same category (Campanella et al., 2002). In Hess et al. (2009), participants had to identify neutral faces that were embedded in a series of either angry or happy faces. They were significantly slower to identify highly dominant versus affiliative faces when the faces were embedded in the angry series. The converse was the case for affiliative faces embedded in the happy series. This supports the notion that anger and dominance on one hand, and happiness and affiliation on the other, share perceptual markers. Further, angry expressions on dominant male faces are perceived as more threatening than the same expressions on more affiliative female faces, and conversely, smiles on more affiliative female faces are perceived as more appetitive than smiles on more dominant male faces (Hess, Sabourin, et al., 2007).

Hess, U., Thibault, P., Adams, R. B., Jr., & Kleck, R. E. (in press). The influence of gender, social roles and facial appearance on perceived emotionality. European Journal of Social Psychology.
Hess, U., Adams, R. B., Jr., Grammer, K., & Kleck, R. E. (2009). If it frowns it must be a man: Emotion expression influences sex labeling. Journal of Vision, 9(12), Article 19, 1-8.
Hess, U., Adams, R. B., Jr., & Kleck, R. E. (2009). The face is not an empty canvas: How facial expressions interact with facial appearance. Philosophical Transactions of the Royal Society London B, 364, 3497-3504.
Hess, U., Adams, R. B., Jr., & Kleck, R. E. (2009). The categorical perception of emotions and traits. Social Cognition, 27, 319-325.
Hess, U., Adams, R. B., Jr., & Kleck, R. E. (2007). Looking at you or looking elsewhere: The influence of head orientation on the signal value of emotional facial expressions. Motivation and Emotion, 31, 137-144.
Hess, U., Sabourin, G., & Kleck, R. E. (2007). Postauricular and eye-blink startle responses to facial expressions. Psychophysiology, 44, 431-435.
Hess, U., Adams, R. B., Jr., & Kleck, R. E. (2005). Who may frown and who should smile? Dominance, affiliation, and the display of happiness and anger. Cognition & Emotion, 19, 515-536.
Hess, U., Adams, R. B., Jr., & Kleck, R. E. (2004). Facial appearance, gender, and emotion expression. Emotion, 4, 378-388.

The self-reference effect

The self-reference effect is a tendency for people to encode information differently depending on the level on which the self is implicated in the information. When people are asked to remember information when it is related in some way to the self, the recall rate can be improved.

The testing effect

The testing effect is the finding that long-term memory is increased when some of the learning period is devoted to retrieving the to-be-remembered information through testing with proper feedback. The effect is also sometimes referred to as retrieval practice, practice testing, or test-enhanced learning.

Wernicke's area

Wernicke's area is associated with other aspects of language, and is named after the German physician Carl Wernicke. In 1874, Wernicke described a patient who was able to speak, but unable to comprehend language. The patient was found to have a lesion in the posterior region of the temporal lobe (2).

*Prototype Model

*Prototype Model Within each category, some members are more representative than others.

regions of brain involved in mental imagery

A mental image or mental picture is the representation in a person's mind of the physical world outside that person.[1] It is an experience that, on most occasions, significantly resembles the experience of perceiving some object, event, or scene, but occurs when the relevant object, event, or scene is not actually present to the senses.[2][3][4][5] There are sometimes episodes, particularly on falling asleep (hypnagogic imagery) and waking up (hypnopompic imagery), when the mental imagery, being of a rapid, phantasmagoric and involuntary character, defies perception, presenting a kaleidoscopic field in which no distinct object can be discerned.[6] Mental imagery can sometimes produce the same effects as would be produced by the behavior or experience imagined.[7] The nature of these experiences, what makes them possible, and their function (if any) have long been subjects of research and controversy in philosophy, psychology, cognitive science, and, more recently, neuroscience. As contemporary researchers use the expression, mental images or imagery can comprise information from any source of sensory input; one may experience auditory images,[8] olfactory images,[9] and so forth. However, the majority of philosophical and scientific investigations of the topic focus upon visual mental imagery. It has sometimes been assumed that, like humans, some types of animals are capable of experiencing mental images.[10] Due to the fundamentally introspective nature of the phenomenon, there is little to no evidence either for or against this view. Philosophers such as George Berkeley and David Hume, and early experimental psychologists such as Wilhelm Wundt and William James, understood ideas in general to be mental images. Today it is very widely believed that much imagery functions as mental representations (or mental models), playing an important role in memory and thinking.[11][12][13][14] William Brant (2013, p. 12) traces the scientific use of the phrase "mental images" back to John Tyndall's 1870 speech called the "Scientific Use of the Imagination". Some have gone so far as to suggest that images are best understood to be, by definition, a form of inner, mental or neural representation,[15][16] although in the case of hypnagogic and hypnopompic imagery it is not representational at all. Others reject the view that the image experience may be identical with (or directly caused by) any such representation in the mind or the brain,[17][18][19][20][21][22] but do not take account of the non-representational forms of imagery. In 2010, IBM applied for a patent on a method to extract mental images of human faces from the human brain. It uses a feedback loop based on brain measurements of the fusiform face area, which activates in proportion to the degree of facial recognition.[23] The patent was issued in 2015.[24]

mental imagery vs. perception brain regions

What are some differences between visual imagery and perception?
- Perception is more vivid than mental imagery.
- Perception occurs automatically, but imagery needs effort.
- Perception is stable; mental imagery is more fragile and can vanish without continued effort.
- There is more brain activity in perception than with mental images.
Why is it necessary to use the subtraction method when analyzing mental imaging data? Image condition minus abstract baseline equals the brain's response to generating imagery: electrical activity when reading and making a mental image, minus electrical activity when reading alone, equals the electrical activity due to making the mental image.
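The subtraction logic in the last answer is literally element-wise subtraction of two measured conditions. A minimal sketch with made-up activation values:

import numpy as np

# Subtraction method (hypothetical numbers): activity while reading AND
# imaging, minus activity while reading alone, isolates the activity
# attributable to generating the mental image.
read_and_image = np.array([5.0, 9.0, 4.0, 7.0])  # signal per region
read_only      = np.array([5.0, 3.0, 4.0, 2.0])

imagery_response = read_and_image - read_only
print(imagery_response)  # [0. 6. 0. 5.] -> regions 2 and 4 carry the imagery signal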

Event Concepts

Event Concepts Events have conceptual structure, just as objects do

Looking glass self

Looking glass self is a term coined by the sociologist Charles Horton Cooley, and refers to his proposal that the individual's perception of himself is based on how he believes that others perceive him. According to Cooley, this process has three steps. First, we imagine how we appear to another person. Second, we imagine which judgements are made based on that appearance. Lastly, we imagine how the person feels about us, based on the judgements made.

Map distortions

Map distortions People resort to abstract facts about the relative locations of large physical bodies to make judgments about smaller locations

motion parallax

Motion parallax is a visual depth cue: as an observer moves, nearby objects appear to move faster across the visual field than objects which are further away.

Ponzo Illusion study

Ponzo Illusion study Participants constructed the Ponzo illusion in their minds; participants rated as high in imagery ability reported as strong an illusion for their mental images as for the actual stimulus

Reaction time slope

Reaction time slope It is also possible to measure the role of attention within visual search experiments by calculating the slope of reaction time over the number of distractors present.[12] Generally, when high levels of attention are required to scan a complex array of stimuli (conjunction search), the slope is steep, because reaction time increases with each added distractor. For simple visual search tasks (feature search), the slope is shallow, because reaction times are fast and require less attention.[13]
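Computing the slope is just a linear regression of reaction time on set size. A sketch with invented data showing the typical contrast between the two search types:

import numpy as np

# Reaction time slope sketch: regress RT (ms) on the number of items.
set_sizes      = np.array([4, 8, 16, 32])
rt_feature     = np.array([450, 452, 449, 455])   # hypothetical, flat
rt_conjunction = np.array([480, 560, 720, 1040])  # hypothetical, linear

slope_feature     = np.polyfit(set_sizes, rt_feature, 1)[0]
slope_conjunction = np.polyfit(set_sizes, rt_conjunction, 1)[0]
print(f"feature search:     {slope_feature:.1f} ms/item")      # ~0 ms/item
print(f"conjunction search: {slope_conjunction:.1f} ms/item")  # ~20 ms/item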

Relation

Relation the element that organizes the arguments of a propositional representation, i.e. verbs/adjectives

Sepanski and Li

Sepanski and Li bilingual memory for meaning is different for primary language than for secondary language

Shepard & Metzler (mental rotation)

Shepard and Metzler (1971) were among the first to study the functional properties of mental images, treating mental rotation as a mental process analogous to physical action. The greater the angle of disparity between the two objects, the longer participants took to complete the rotation.

Mental rotation is the ability to rotate mental representations of two-dimensional and three-dimensional objects as it is related to the visual representation of such rotation within the human mind.[1] Mental rotation, as a function of visual representation in the human brain, has been associated with the right cerebral hemisphere. There is a relationship between similar areas of the brain associated with perception and mental rotation. There could also be a relationship between the cognitive rate of spatial processing, general intelligence and mental rotation.[2][3][4] Mental rotation can be described as the brain moving objects in order to help understand what they are and where they belong. Mental rotation has been studied to try to figure out how the mind recognizes objects in its environment. Researchers generally call such objects stimuli. Mental rotation is one cognitive function the person uses to figure out what the altered object is. Mental rotation can be separated into the following cognitive stages:[2]
1. Create a mental image of an object from all directions (imagining where it continues straight vs. turns).
2. Rotate the object mentally until a comparison can be made (orienting the stimulus to the other figure).
3. Make the comparison.
4. Decide if the objects are the same or not.
5. Report the decision (reaction time is recorded when a lever is pulled or a button pushed).

Assessment

In a mental rotation test, the participant compares two 3D objects (or letters), often rotated about some axis, and states whether they are the same image or mirror images (enantiomorphs).[1] Commonly, the test will have pairs of images each rotated a specific number of degrees (e.g. 0°, 60°, 120° or 180°). A set number of pairs will be the same image rotated, while others are mirrored. The researcher judges the participant on how accurately and rapidly they can distinguish between the mirrored and non-mirrored pairs.[5]

Notable research

Shepard and Metzler (1971)

Roger Shepard and Jacqueline Metzler (1971) were some of the first to research the phenomenon.[6] Their experiment specifically tested mental rotation of three-dimensional objects. Each subject was presented with multiple pairs of three-dimensional, asymmetrical lined or cubed objects. The experiment was designed to measure how long it would take each subject to determine whether the pair of objects were indeed the same object or two different objects. Their research showed that the reaction time for participants to decide whether the pair of items matched or not was linearly proportional to the angle of rotation from the original position (a numerical sketch of this linear pattern appears at the end of this entry). That is, the more an object has been rotated from the original, the longer it takes an individual to determine if the two images are of the same object or enantiomorphs.[7]

Vandenberg and Kuse (1978)

In 1978, Steven G. Vandenberg and Allan R. Kuse developed a test to assess mental rotation abilities that was based on Shepard and Metzler's (1971) original study. The Mental Rotations Test was constructed using India ink drawings. Each stimulus was a two-dimensional image of a three-dimensional object drawn by a computer. The image was then displayed on an oscilloscope.
Each image was then shown at different orientations rotated around the vertical axis. Following the basic ideas of Shepard and Metzler's experiment, this study found a significant difference in mental rotation scores between men and women, with men performing better. Correlations with other measures showed a strong association with tests of spatial visualization and no association with verbal ability.[8][9]

Neural activity

In 1999, a study was conducted to find out which part of the brain is activated during mental rotation. Seven volunteers (four males and three females) between the ages of twenty-nine and sixty-six participated in this experiment. For the study, the subjects were shown eight characters 4 times each (twice in normal orientation and twice reversed) and had to decide whether the character was in its normal configuration or was the mirror image. During this task, a PET scan was performed and revealed activation in the right posterior parietal lobe.[10] Functional magnetic resonance imaging (fMRI) studies of brain activation during mental rotation reveal consistent increased activation of the parietal lobe, specifically the intraparietal sulcus, that is dependent on the difficulty of the task. In general, the larger the angle of rotation, the more brain activity is associated with the task. This increased brain activation is accompanied by longer times to complete the rotation task and higher error rates. Researchers have argued that the increased brain activation, increased time, and increased error rates indicate that task difficulty is proportional to the angle of rotation.[11][12] (Figures: rotation in depth, 90 degrees; rotation in the picture plane, 90 degrees.)

Color

Physical objects that people imagine rotating in everyday life have many properties, such as textures, shapes, and colors. A study at the University of California, Santa Barbara was conducted to specifically test the extent to which visual information, such as color, is represented during mental rotation. This study used several methods, such as reaction time studies, verbal protocol analysis, and eye tracking. In the initial reaction time experiments, those with poor rotational ability were affected by the colors of the image, whereas those with good rotational ability were not. Overall, those with poor ability were faster and more accurate at identifying images that were consistently colored. The verbal protocol analysis showed that the subjects with low spatial ability mentioned color in their mental rotation tasks more often than participants with high spatial ability. One thing that can be shown through this experiment is that those with higher rotational ability will be less likely to represent color in their mental rotation. Poor rotators will be more likely to represent color in their mental rotation, using piecemeal strategies (Khooshabeh & Hegarty, 2008).

Effect on athleticism and artistic ability

Research on how athleticism and artistic ability affect mental rotation has also been done. Pietsch and Jansen (2012) showed that people who were athletes or musicians had faster reaction times than people who were not. They tested this by splitting people aged 18 and over into three groups: students who were studying math, sports students, and education students. It was found through the mental rotation test that students who were focused on sports did much better than those who were math or education majors.
It was also found that the male athletes in the experiment were faster than the females, but male and female musicians showed no significant difference in reaction time. Moreau, Clerc, et al. (2012) also investigated whether athletes were more spatially aware than non-athletes. This experiment took undergraduate college students and tested them with the mental rotation test before any sport training, and then again afterward. The participants were trained in two different sports to see if this would help their spatial awareness. It was found that the participants did better on the mental rotation test after they had trained in the sports than they did before the training. There are ways to train one's spatial awareness; this experiment suggested that if people could find ways to train their mental rotation skills, they could perform better in high-context activities with greater ease. A study investigated the effect of mental rotation on postural stability. Participants performed an MR (mental rotation) task involving either foot stimuli, hand stimuli, or non-body stimuli (a car) and then had to balance on one foot. The results suggested that MR tasks involving foot stimuli were more effective at improving balance than hand or car stimuli, even after 60 minutes.[13] Researchers studied the difference in mental rotation ability between gymnasts, handball players, and soccer players with both in-depth and in-plane rotations. Results suggested that athletes were better at performing mental rotation tasks that were more closely related to their sport of expertise.[14] There is a correlation between mental rotation and motor ability in children, and this connection is especially strong in boys aged 7-8. Children are known for having very connected motor and cognitive processes, and the study showed that this overlap is influenced by motor ability.[15] A mental rotation test (MRT) was carried out on gymnasts, orienteers, runners, and non-athletes. Results showed that non-athletes were greatly outperformed by gymnasts and orienteers, but not runners. Gymnasts (egocentric athletes) did not outperform orienteers (allocentric athletes).[16]

Gender

Different studies have shown that there is a difference between males and females in mental rotation tasks. In order to explain this difference, we can look at brain activation during a mental rotation task. In 2012, a study[17] was done on people who had graduated in the sciences or in the liberal arts. Males and females were asked to execute a mental rotation task, and their brain activity was recorded with fMRI. The researchers found a difference in brain activation: males present stronger activity in the area of the brain used in a mental rotation task. A study from 2008[18] showed that this difference occurs early during development. The experiment was done on 3- to 4-month-old infants using a 2D mental rotation task. They used a preference apparatus that consists of observing how long the infant looks at the stimulus. They started by familiarizing the participants with the number "1" and its rotations. Then they showed them a picture of a rotated "1" and its mirror image. The study showed that males are more interested in the mirror image, while females are equally interested in the rotated "1" and its mirror image. That means that males and females process mental rotation differently. Another study,[19] from 2015, focused on women and their abilities in a mental rotation task and in an emotion recognition task.
In this experiment, they induced a feeling or a situation in which women felt more powerful or less powerful. They were able to conclude that women in a situation of power are better at a mental rotation task (but worse at an emotion recognition task) than other women. Studying differences between male and female brains can have interesting applications. For example, it could help in the understanding of autism spectrum disorders. One of the theories concerning autism is the EMB (extreme male brain) theory, which holds that autistic people have an "extreme male brain". In a study[20] from 2015, researchers confirmed that there is a difference between males and females in a mental rotation task (by studying people without autism): males are more successful. They then highlighted the fact that autistic people do not show this "male performance" in a mental rotation task. They conclude their study by stating that "autistic people do not have an extreme version of a male cognitive profile as proposed by the EMB theory".[20]

Current research directions

There may be relationships between competent bodily movement and the speed with which individuals can perform mental rotation. Researchers found that children who trained with mental rotation tasks had improved strategy skills after practicing.[21] Follow-up studies will compare the differences in the brain among the attempts to discover effects on other tasks and the brain. People use many different strategies to complete tasks; psychologists will study participants who use specific cognitive skills to compare competency and reaction times.[22] Others will continue to examine the differences in competency of mental rotation based on the objects being rotated.[23] Participants' identification with the object could hinder or help their mental rotation abilities across gender and ages, supporting the earlier claim that males have faster reaction times.[17][24][25] Psychologists will continue to test similarities between mental rotation and physical rotation, examining the difference in reaction times and relevance to environmental implications.[26]
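The core Shepard and Metzler result, that decision time grows linearly with the angular disparity between the two objects, can be summarized numerically. The reaction times below are invented to mimic the classic pattern:

import numpy as np

# Fit RT as a linear function of rotation angle (hypothetical data).
angles = np.array([0, 60, 120, 180])     # degrees of disparity
rts    = np.array([1.0, 2.1, 3.0, 4.1])  # seconds, invented

slope, intercept = np.polyfit(angles, rts, 1)
print(f"{slope * 1000:.0f} ms per degree")    # ~17 ms per degree for these numbers
print(f"{1 / slope:.0f} degrees per second")  # ~59 degrees per second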

auditory sensory store

auditory sensory store echoic memory; the perceptual regions of the cortex hold a brief representation of sensory information for further processing

3

# of sodium ions pumped out of the neuron by the sodium-potassium pump in each cycle (in exchange for 2 potassium ions pumped in)

spotlight theory of attention

Attention can be moved from spot to spot in a manner similar to that of a spotlight beam.

Catharsis

Catharsis is a term introduced into psychotherapy by Josef Breuer, referring to the hypothesized therapeutic effect of releasing tension created by psychological distress through expression and activity.

conjunction search

Conjunction Search Conjunction search (also known as inefficient or serial search)[4] is a visual search process that focuses on identifying a previously requested target surrounded by distractors possessing one or more common visual features with the target itself.[8] An example of a conjunction search task is having a person identify a red X (target) amongst distractors composed of black Xs (same shape) and red Os (same color).[8] Unlike feature search, conjunction search involves distractors (or groups of distractors) that may differ from each other but exhibit at least one common feature with the target.[8] The efficiency of conjunction search in regards to reaction time (RT) and accuracy is dependent on the distractor ratio[8] and the number of distractors present.[5] As the distractors represent the differing individual features of the target more equally amongst themselves (the distractor-ratio effect), reaction time increases and accuracy decreases.[8] As the number of distractors present increases, reaction time increases and accuracy decreases.[4] However, with practice the original reaction time restraints of conjunction search tend to show improvement.[9] In the early stages of processing, conjunction search utilizes bottom-up processes to identify pre-specified features amongst the stimuli.[5] These processes are then overtaken by a more serial process of consciously evaluating the indicated features of the stimuli[5] in order to properly allocate one's focal spatial attention towards the stimulus that most accurately represents the target.[10] In many cases, top-down processing affects conjunction search by eliminating stimuli that are incongruent with one's previous knowledge of the target description, which in the end allows for more efficient identification of the target.[6][7] An example of the effect of top-down processes on a conjunction search task is that when searching for a red 'K' among red 'Cs' and black 'Ks', individuals ignore the black letters and focus on the remaining red letters in order to decrease the set size of possible targets and, therefore, more efficiently identify their target.[11]

Why is conjunction search less efficient than feature search? Conjunction search is less efficient because it requires the presence of two or more attributes, whereas feature search requires the presence of only one attribute.

How are illusory conjunctions a by-product of conjunction search? An illusory conjunction is an erroneous combination of two features in a visual scene; for instance, seeing a red X when the display contains red letters and Xs, but no red Xs. This error can occur during a recognition task that involves conjunction search, when the observer tries to report which objects were present in a display of items. The observer confuses attributes of one object with attributes of another.
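One classic way to model why conjunction search costs grow with the number of distractors is a serial, self-terminating scan: on average, half the items are inspected before the target is found on target-present trials, and all items on target-absent trials. A toy sketch (the timing parameters are invented):

# Serial self-terminating search: expected RT as a function of set size.
base_ms, per_item_ms = 400, 40  # hypothetical base time and cost per item

def expected_rt(set_size, target_present):
    checks = (set_size + 1) / 2 if target_present else set_size
    return base_ms + per_item_ms * checks

for n in (4, 8, 16):
    print(n, expected_rt(n, True), expected_rt(n, False))
# Target-present slopes come out at roughly half the target-absent slopes,
# the signature pattern reported for serial (conjunction) searches.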

Crystallized and fluid intelligence

Crystallized and fluid intelligence are terms coined by Raymond Cattell in 1971. Both are factors of general intelligence, and are measured by sections of IQ tests. Crystallized intelligence refers to the knowledge and skills an individual has accumulated during the course of their lifetime. Fluid intelligence refers to the ability to analyze information and understand relationships between stimuli, independent of previous experience.

Deep Processing

Deep Processing - This involves semantic processing, which happens when we encode the meaning of a word and relate it to similar words with similar meaning. Deep processing involves elaborative rehearsal, a more meaningful analysis (e.g. images, thinking, associations, etc.) of information that leads to better recall. For example, giving words a meaning or linking them with previous knowledge.

Embodied Cognition

Embodied Cognition emphasizes the contribution of motor action and how it connects us to the environment; part of the motor cortex that would produce action is activated when thinking about the action

Family Resemblance

Family Resemblance each member has some but not all of the features of a category; a feature may not be common to all members of the category

feature search

Feature Search Feature search (also known as "disjunctive" or "efficient" search)[4] is a visual search process that focuses on identifying a previously requested target amongst distractors that differ from the target by a unique visual feature such as color, shape, orientation, or size.[5] An example of a feature search task is asking a participant to identify a white square (target) surrounded by black squares (distractors).[4] In this type of visual search, the distractors are characterized by the same visual features.[5] The efficiency of feature search in regards to reaction time (RT) and accuracy depends on the "pop out" effect,[6] bottom-up processing,[6] and parallel processing.[5] However, the efficiency of feature search is unaffected by the number of distractors present.[5] The "pop out" effect is an element of feature search that characterizes the target's ability to stand out from surrounding distractors due to its unique feature.[6] Bottom-up processing, which is the processing of information that depends on input from the environment,[6] explains how one utilizes feature detectors to process characteristics of the stimuli and differentiate a target from its distractors.[5] This draw of visual attention towards the target due to bottom-up processes is known as "saliency."[7] Lastly, parallel processing is the mechanism that allows one's feature detectors to work simultaneously in identifying the target.[5]

response enhancement

How is response enhancement related to attentional processing on the cellular level? Response enhancement of single cells is one of the ways in which attention could change the response of a cell. For instance, a cell that responds to a specific orientation (e.g., vertical) might give a stronger response in the presence of attention.

implicit/explicit memory

Implicit memory is a type of memory in which previous experiences aid in the performance of a task without conscious awareness of these previous experiences. This allows people to remember how to tie their shoes or ride a bicycle without consciously thinking about these activities. Explicit memory is the conscious, intentional recollection of previous experiences and information. People use explicit memory throughout the day, such as remembering the time of an appointment or recollecting an event from years ago.

Multimodal hypothesis

Multimodal hypothesis we have various representations tied to different perceptual and motor systems so that we can directly convert one representation to another

proactive interference

Old learning interferes with new memory.

Overconfidence

Overconfidence
• People tend to be more confident after they have made a decision than before they made it.
• Random selection of subjects: researchers asked people waiting in line how confident they were that they would win their bet, or asked people after they had put money down on their bet.
• People who had already placed the bet gave higher confidence ratings.
• There is a tendency to be more sure after we have made the decision.

Positron Emission tomography PET

Positron emission tomography (PET) is dependent on blood flow; it is hemodynamic (it maps the brain's metabolic activity) and invasive, because it requires an injection of radioactive material. The tracer enters the bloodstream, and images can be created because increased metabolism in areas of the brain being used leads to more tracer in those areas. As the tracer decays it emits a positron (the positively charged antiparticle of the electron), which collides with an electron, producing gamma rays that the scanner detects.
Advantages of PET: good spatial resolution; quiet; provides absolute quantitative data.
Limitations of PET: invasive (radioactive material is injected into the body); the radioactive tracer has a short lifespan; an indirect measure of neural activity.

Propositional representation

Propositional representation a representation of meaning as a set of propositions; propositional information can be represented in networks that display the relationships among concepts.
Argument an element of a propositional representation that corresponds to a time, place, person or object.
Propositional analyses represent memory for complex sentences in terms of memory for simple, abstract propositional units; people tend to remember the propositions they encounter but are insensitive to the actual combination of propositions.
Propositional networks Nodes: the propositions, relations and arguments; locations in the network; ideas. Links: the connections between nodes; associations between ideas.
Amodal symbol system elements within the system are inherently non-perceptual; abstract mental representations, not tied to specific visual or auditory perception of objects or events.
Perceptual symbol system all information is represented in terms that are modality specific and basically perceptual.

Prototype theories

Prototype theories An alternative to template theory is based on prototype matching. Instead of comparing a visual array to a stored template, the array is compared to a stored prototype, the prototype being a kind of average of many other patterns. The perceived array does not need to match the prototype exactly for recognition to occur, so long as there is a family resemblance. For example, if I am looking down on a London bus from above, its qualities of size and redness enable me to recognize it as a bus, even though the shape does not match my prototype. There is good evidence that people do form prototypes after exposure to a series of related stimuli. For instance, in one study people were shown a series of patterns that were related to a prototype, but not the prototype itself. When later shown a series of distractor patterns plus the prototype, the participants identified the prototype as a pattern they had seen previously.
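
A minimal sketch of prototype matching in Python, assuming patterns are reduced to numeric feature vectors (the "size" and "redness" features, the exemplar values, and the threshold are all invented for illustration): the prototype is the average of stored exemplars, and recognition only requires a new pattern to fall close enough to it.

    import math

    def make_prototype(exemplars):
        # The prototype is a kind of average of many stored patterns.
        n = len(exemplars)
        return [sum(e[i] for e in exemplars) / n for i in range(len(exemplars[0]))]

    def recognize(pattern, prototype, threshold=2.0):
        # Family resemblance: close enough counts; an exact match is not required.
        return math.dist(pattern, prototype) <= threshold

    buses = [[9.0, 8.0], [8.5, 9.0], [9.5, 7.5]]   # [size, redness] exemplars
    proto = make_prototype(buses)
    print(recognize([9.0, 8.5], proto))  # True: a bus seen from above still matches
    print(recognize([2.0, 1.0], proto))  # False: a small grey object does not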

sentence verification task

Sentence verification task - measures latency to respond to a sentence ("a canary is a bird"). The idea is that latency reflects organization. This technique has revealed four observations about memory: 1. The category size effect - people respond faster when the item is a member of a small category ("a poodle is a dog" versus "a poodle is an animal"). 2. The typicality (relatedness) effect - people are faster to respond to usual or typical members (poodle vs. bloodhound). "Fuzzy sets": not every member of a category is a good representative of that category (whales). Katz (1981): "a barrel is round" took 0.3 s longer to verify than "a ball is round". 3. The context (priming) effect - people respond faster to an item preceded by a similar item. McNamara (1984): cities preceded by nearby cities were recognized faster. 4. The true-false effect, or fast-true effect - we answer true items faster than false items (by about 0.17 s).

socio-cultural approach

Socio-cultural - Behavior is shaped by society, our culture, and our environment. Catherine A. Sanderson defines the socio-cultural perspective as one describing people's behavior and mental processes as shaped in part by their social and/or cultural context, including race, gender, and nationality. This perspective of psychology holds that our behavior is influenced by society, our culture, and our environment. According to social psychologists, behavior has a social and cultural context, and these factors play a major role in shaping one's perceptions and behavior. This approach to psychology tries to find how social norms affect behavior and how social groups such as race, religion, or gender can influence the way we behave. A cross-cultural perspective studies how behavior changes across cultures.

Taylor and Tversky 1992;

Taylor and Tversky (1992): people are quite effective at constructing cognitive maps from verbal descriptions.

serial position effect

The finding that, when asked to recall a list of unrelated items, performance is better for the items at the beginning (primacy) and the end (recency) of the list.

The generation effect

The generation effect is a phenomenon where information is better remembered if it is generated from one's own mind rather than simply read. Researchers have struggled to account for why generated information is better recalled than read information, but no single explanation has been sufficient.

elaborative rehearsal

The linking of new information in short-term memory to familiar material stored in long-term memory.

The levels of processing model (Craik and Lockhart, 1972)

The levels of processing model (Craik and Lockhart, 1972) focuses on the depth of processing involved in memory, and predicts that the deeper information is processed, the longer a memory trace will last. Craik defined depth as "the meaningfulness extracted from the stimulus rather than in terms of the number of analyses performed upon it" (1973, p. 48). Unlike the multi-store model it is a non-structured approach. The basic idea is that memory is really just what happens as a result of processing information: memory is a by-product of the depth of processing of information, and there is no clear distinction between short-term and long-term memory. Therefore, instead of concentrating on the stores/structures involved (i.e. short-term memory and long-term memory), this theory concentrates on the processes involved in memory. Levels of processing: the idea that the way information is encoded affects how well it is remembered. The deeper the level of processing, the easier the information is to recall.
Strengths: The theory is an improvement on Atkinson and Shiffrin's account of transfer from STM to LTM. For example, elaborative rehearsal leads to better recall of information than mere maintenance rehearsal. The levels of processing model changed the direction of memory research: it showed that encoding was not a simple, straightforward process, widening the focus from seeing long-term memory as a simple storage unit to seeing it as a complex processing system. Craik and Lockhart's ideas led to hundreds of experiments, most of which confirmed the superiority of 'deep' semantic processing for remembering information. It explains why we remember some things much better and for much longer than others. This explanation of memory is useful in everyday life because it highlights the way in which elaboration, which requires deeper processing of information, can aid memory.
Weaknesses: Despite these strengths, there are a number of criticisms of the levels of processing theory: • It does not explain how deeper processing results in better memories. • Deeper processing takes more effort than shallow processing, and it could be this effort, rather than the depth of processing, that makes it more likely people will remember something. • The concept of depth is vague and cannot be observed, and therefore cannot be objectively measured. Eysenck (1990) claims that the levels of processing theory describes rather than explains. Craik and Lockhart (1972) argued that deep processing leads to better long-term memory than shallow processing, but they failed to provide a detailed account of why deep processing is so effective. More recent studies have clarified this point: it appears that deeper coding produces better retention because it is more elaborate. Elaborative encoding enriches the memory representation of an item by activating many aspects of its meaning and linking it into the pre-existing network of semantic associations. Later research indicated that processing is more complex and varied than the levels of processing theory suggests; in other words, there is more to processing than depth and elaboration. For example, research by Bransford et al. (1979) indicated that a sentence such as "A mosquito is like a doctor because both draw blood" is more likely to be recalled than the more elaborated sentence "A mosquito is like a racoon because they both have heads, legs and jaws". It appears that it is the distinctiveness of the first sentence that makes it easier to remember - it is unusual to compare a doctor to a mosquito, so the sentence stands out and is more easily recalled. Another problem is that participants typically spend longer processing the deeper or more difficult tasks, so the results could be partly due to more time being spent on the material. The type of processing, the amount of effort, and the length of time spent on processing tend to be confounded: deeper processing goes with more effort and more time, so it is difficult to know which factor influences the results. The ideas of 'depth' and 'elaboration' are vague and ill defined (Eysenck, 1978), and as a result are difficult to measure; indeed, there is no independent way of measuring the depth of processing. This can lead to a circular argument - it is predicted that deeply processed information will be remembered better, but the measure of depth of processing is how well the information is remembered. Finally, the levels of processing theory focuses on the processes involved in memory and thus ignores the structures; there is evidence to support the idea of memory structures such as STM and LTM as the multi-store model proposed (e.g. patient H.M., the serial position effect, etc.). Therefore, memory is more complex than described by the LOP theory.

The peripheral nervous system (PNS)

The peripheral nervous system is all of the nervous system that is not the central nervous system (brain and spinal cord). Anatomically, this includes cranial and spinal nerves projecting to and from the central nervous system, as well as ganglia (collections of cell bodies) and sensory receptors. In terms of both general function and anatomy, the peripheral nervous system is commonly divided into the somatic, autonomic and enteric nervous systems. The enteric nervous system controls gastrointestinal function.

bottom-up processing

These terms are also employed in neuroscience, cognitive neuroscience and cognitive psychology to discuss the flow of information in processing.[5] Typically, sensory input is considered "bottom-up", and higher cognitive processes, which have more information from other sources, are considered "top-down". A bottom-up process is characterized by an absence of higher-level direction in sensory processing, whereas a top-down process is characterized by a high level of direction of sensory processing by higher cognition, such as goals or targets (Biederman).[3] A bottom-up approach allows for more experimentation and a better feeling for what is needed at the bottom; other evidence suggests that there is a third, combined approach to change (see Stewart, Manges, & Ward, 2015).[12] According to college teaching notes written by Charles Ramskov, Rock, Neisser, and Gregory claim that the top-down approach involves perception that is an active and constructive process.[6] Additionally, perception is not given directly by stimulus input, but is the result of interactions among the stimulus, internal hypotheses, and expectations. According to Theoretical Synthesis, when a stimulus is presented briefly and its clarity is uncertain, giving a vague stimulus, perception becomes a top-down approach.[7] Conversely, psychology defines bottom-up processing as an approach in which there is a progression from the individual elements to the whole. According to Ramskov, one proponent of the bottom-up approach, Gibson, claims that visual perception is a process that uses information available from the proximal stimulus, which is produced by the distal stimulus.[8][9] Theoretical Synthesis also claims that bottom-up processing occurs "when a stimulus is presented long and clearly enough."[7] Cognitively speaking, certain processes, such as fast reactions or quick visual identification, are considered bottom-up because they rely primarily on sensory information, whereas processes such as motor control and directed attention are considered top-down because they are goal-directed. Neurologically speaking, some areas of the brain, such as area V1, mostly have bottom-up connections,[7] while other areas, such as the fusiform gyrus, have inputs from higher brain areas and are considered to have top-down influence.[10] The study of visual attention provides an example. If your attention is drawn to a flower in a field, it may be because the color or shape of the flower is visually salient. The information that caused you to attend to the flower came to you in a bottom-up fashion - your attention was not contingent upon knowledge of the flower; the outside stimulus was sufficient on its own. Contrast this with a situation in which you are looking for a flower: you have a representation of what you are looking for, and when you see the object you are looking for, it is salient. This is an example of the use of top-down information. In cognitive terms, two thinking approaches are distinguished. "Top-down" (or "big chunk") is stereotypically the visionary, the person who sees the larger picture and overview; such people focus on the big picture and from that derive the details to support it. "Bottom-up" (or "small chunk") cognition is akin to focusing on the detail primarily, rather than the landscape.
The expression "seeing the wood for the trees" references the two styles of cognition.[11] A bottom-up approach is the piecing together of systems to give rise to more complex systems, making the original systems sub-systems of the emergent system. Bottom-up processing is a type of information processing based on incoming data from the environment to form a perception. From a cognitive psychology perspective, information enters the eyes in one direction (sensory input, or the "bottom") and is then turned into an image by the brain that can be interpreted and recognized as a perception (output that is "built up" from processing to final cognition). In a bottom-up approach, the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which in turn are linked, sometimes through many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, in which the beginnings are small but eventually grow in complexity and completeness; however, such "organic strategies" may result in a tangle of elements and subsystems, developed in isolation and subject to local optimization rather than meeting a global purpose. In top-down approaches, knowledge or expectations are used to guide processing; bottom-up approaches, by contrast, are more like the structuralist approach, piecing together data until a bigger picture is arrived at. One of the strongest advocates of a bottom-up approach was J. J. Gibson (1904-1980), who articulated a theory of direct perception. This stated that the real world provides sufficient contextual information for our visual systems to directly perceive what is there, unmediated by the influence of higher cognitive processes. Gibson developed the notion of affordances, referring to those aspects of objects or environments that allow an individual to perform an action. Gibson's emphasis on the match between individual and environment led him to refer to his approach as ecological. Most psychologists now would argue that both bottom-up and top-down processes are involved in perception. In short, bottom-up processing starts with something simple and builds up to something complex; it is driven by something in the environment - data-driven, stimulus-driven. The brain assembles specific features of shapes (angles and lines) to form patterns that we can compare with stored images we have seen before; for example, the brain combines individual lines and angles to form the pattern we recognize as the number 4, or discriminates between different musical instruments in orchestral music. A "bottom-up" approach to change is one that works from the grassroots - from a large number of people working together, causing a decision to arise from their joint involvement. A decision by a number of activists, students, or victims of some incident to take action is a "bottom-up" decision. A bottom-up approach can be thought of as "an incremental change approach that represents an emergent process cultivated and upheld primarily by frontline workers" (Stewart, Manges, & Ward, 2015, p. 241).[12]

Stroop effect

In psychology, the Stroop effect is a demonstration of interference in the reaction time of a task. When the name of a color (e.g., "blue", "green", or "red") is printed in a color that is not denoted by the name (e.g., the word "red" printed in blue ink instead of red ink), naming the color of the word takes longer and is more prone to errors than when the color of the ink matches the name of the color. The effect is named after John Ridley Stroop, who first published it in English in 1935 in an article in the Journal of Experimental Psychology entitled "Studies of interference in serial verbal reactions", which included three different experiments.[1] The effect had previously been published in Germany in 1929 by Erich Rudolf Jaensch,[2] and its roots can be followed back to the works of James McKeen Cattell and Wilhelm Maximilian Wundt in the nineteenth century.[3][4] The original paper has been one of the most cited papers in the history of experimental psychology, leading to more than 700 replications.[4] The effect has been used to create a psychological test (the Stroop test) that is widely used in clinical practice and investigation.
In his experiments, Stroop administered several variations of the same test, for which three different kinds of stimuli were created: names of colors in black ink; names of colors in a different ink than the color named; and squares of a given color.[1] In the first experiment, words and conflict-words were used. The task required the participants to read the written color names of the words independently of the color of the ink (for example, they would have to read "purple" no matter what the color of the font). In the second experiment, conflict-words and color patches were used; participants were required to say the ink color of the letters independently of the written word, and also to name the color of the patches. If the word "purple" was written in red font, they would have to say "red" rather than "purple"; when the squares were shown, the participant spoke the name of the color. In the third experiment, Stroop tested his participants at different stages of practice at the tasks and stimuli used in the first and second experiments, examining learning effects.[1] Unlike researchers now using the test for psychological evaluation,[5] Stroop used only the three basic scores, rather than more complex derivative scoring procedures. Stroop noted that participants took significantly longer to complete the color naming in the second task than they had taken to name the colors of the squares; this delay had not appeared in the first experiment. Such interference was explained by the automation of reading, where the mind automatically determines the semantic meaning of the word (it reads the word "red" and thinks of the color "red"), and then must intentionally check itself and identify instead the color of the word (the ink is a color other than red), a process that is not automated.[1]
Experimental findings: Stimuli in Stroop paradigms can be divided into three groups: neutral, congruent and incongruent. Neutral stimuli are those in which only the text (as in stimuli 1 of Stroop's experiment) or only the color (as in stimuli 3) is displayed.[6] Congruent stimuli are those in which the ink color and the word refer to the same color (for example, the word "pink" written in pink). Incongruent stimuli are those in which ink color and word differ.[6] Three experimental findings are recurrently found in Stroop experiments.[6] The first is semantic interference: naming the ink color of neutral stimuli (where the ink color and word do not interfere with each other) is faster than in incongruent conditions. It is called semantic interference because it is usually accepted that the relationship in meaning between ink color and word is at the root of the interference.[6] The second finding, semantic facilitation, is that naming the ink color of congruent stimuli (where the ink color and the word match) is faster than naming neutral stimuli (e.g., stimulus 3, where only a colored square is shown).[6] The third finding is that both semantic interference and facilitation disappear when the task consists of reading the word instead of naming the ink. This has sometimes been called Stroop asynchrony, and has been explained by a reduced automatization of color naming compared with word reading.[6] In the study of interference theory, the most commonly used procedure has been similar to Stroop's second experiment, in which subjects were tested on naming the colors of incompatible words and of control patches; the first experiment (reading words in black versus incongruent colors) has been discussed less. In both cases, the interference score is expressed as the difference between the times needed to read each of the two types of cards.[4] Instead of naming stimuli, subjects have also been asked to sort stimuli into categories.[4] Different characteristics of the stimulus, such as ink colors or the direction of words, have also been systematically varied.[4] None of these modifications eliminates the effect of interference.[4]
Brain imaging techniques including magnetic resonance imaging (MRI), functional magnetic resonance imaging (fMRI), and positron emission tomography (PET) have shown that two main areas of the brain are involved in processing the Stroop task:[7][8] the anterior cingulate cortex and the dorsolateral prefrontal cortex.[9] More specifically, while both are activated when resolving conflicts and catching errors, the dorsolateral prefrontal cortex assists in memory and other executive functions, while the anterior cingulate cortex is used to select an appropriate response and allocate attentional resources.[10] The posterior dorsolateral prefrontal cortex creates the appropriate rules for the brain to accomplish the current goal.[10] For the Stroop effect, this involves activating the areas of the brain involved in color perception, but not those involved in word encoding.[11] It counteracts biases and irrelevant information, for instance the fact that the semantic perception of the word is more striking than the color in which it is printed. Next, the mid-dorsolateral prefrontal cortex selects the representation that will fulfil the goal: the relevant information must be separated from irrelevant information, so the focus is placed on the ink color and not the word.[10] Furthermore, research has suggested that left dorsolateral prefrontal cortex activation during a Stroop task is related to an individual's expectation regarding the conflicting nature of the upcoming trial, and not so much to the conflict itself; conversely, the right dorsolateral prefrontal cortex aims to reduce the attentional conflict and is activated after the conflict is over.[9] Moreover, the posterior dorsal anterior cingulate cortex is responsible for which decision is made (i.e., whether you will say the incorrect answer [written word] or the correct answer [ink color]).[9] Following the response, the anterior dorsal anterior cingulate cortex is involved in response evaluation - deciding whether the answer is correct or incorrect; activity in this region increases when the probability of an error is higher.[12]
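
Since the interference score is just a difference between mean naming times, it is easy to state as code. A minimal Python sketch (the millisecond values are invented for illustration):

    def mean(xs):
        return sum(xs) / len(xs)

    def stroop_scores(rt_neutral, rt_congruent, rt_incongruent):
        # Interference: how much slower incongruent naming is than neutral naming.
        # Facilitation: how much faster congruent naming is than neutral naming.
        interference = mean(rt_incongruent) - mean(rt_neutral)
        facilitation = mean(rt_neutral) - mean(rt_congruent)
        return interference, facilitation

    # Hypothetical naming times in milliseconds:
    print(stroop_scores(rt_neutral=[620, 640, 610],
                        rt_congruent=[580, 600, 590],
                        rt_incongruent=[760, 790, 770]))  # (150.0, 33.3...)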

Working memory

Working memory: briefly stores and processes selected information from the sensory registers. Another term for short-term memory that emphasizes the active processing component of this memory system.

concepts

Concepts: a class or category that subsumes a number of individual instances. An important way of relating concepts is through propositions, which make an assertion relating a subject and a predicate.

operational span

Operational span: an experimental procedure used to measure someone's working memory capacity.

2

The number of potassium ions pumped into the cell by the sodium-potassium pump for every three sodium ions pumped out.

Akinetopsia aka motion blindness

Akinetopsia is an extremely rare disorder in which a patient has lost the ability to perceive motion in their visual field, despite being able to see stationary objects. Akinetopsia is attributed to specific lesions in the visual cortex. The term was introduced in 1991 by the neurobiologist Semir Zeki.

Alternating Attention

Alternating Attention Alternating attention is the form of mental flexibility that allows you to shift your focus of attention and move between tasks having different cognitive requirements. It is alternating your attention back and forth between two different tasks that require the use of different areas of your brain. You probably use alternating attention almost all the time: you constantly need to make sudden changes to your activities or actions, which requires your attention to shift. You may use alternating attention when reading a recipe (learning) and then performing the tasks of the recipe (doing). It could also be alternating between unrelated tasks, such as cooking while helping your child with her homework.

epiphenomenon

An epiphenomenon is a secondary phenomenon that occurs alongside a primary phenomenon. (A side-effect is a specific kind of epiphenomenon that does occur as a consequence of the primary phenomenon.) In philosophy of mind, epiphenomenalism is the view that mental phenomena are caused by physical phenomena, and cannot themselves cause anything.

Automatic Processing

Automatic Processing: performance or thought with little awareness. o Ex. Driving a car while talking on a cell phone; singing along with the radio. o Performed more quickly, with less effort. o Often inflexible (automatic) behavior. o Allows multi-tasking. o We go into automatic mode when: • the task is routine • the task is easy • we are tired • we are distracted • motivation is low. Automatic Processing example (the coffee-machine study): o Access to a coffee machine was controlled by planting one person making coffee, with the subject waiting behind; subjects were randomly assigned to one of three conditions: • the confederate behind the subject asks if they can cut in front, giving no reason; • the confederate asks if they can cut in front "because I'm running late", etc. (70% compliance); • the confederate asks if they can cut in front "because I have to make coffees" (70% compliance). The empty reason worked just as well, because people responded automatically to hearing the word "because".

Availability Heuristic

Availability Heuristic • Sometimes when we are asked to make a decision, rather than doing an exhaustive search we base it on how many examples come to mind. • Who has made more movies, Tom Cruise or Bill Murray? If you are using availability, you are more likely to pick Tom Cruise, because you can remember more Tom Cruise movies rather than thinking of small independent movies with Bill Murray - even though Murray has made more. • Are there more words that begin with the letter K, or words with K as the third letter? It is easier to think of "king", "kangaroo", "kite" than "acknowledge", "bike", "hike", so people choose the first option even when the second is more common.

limited capacity to attention

Capacity theory is the theoretical approach that pulled researchers away from filter theories, beginning with Kahneman's 1973 book Attention and Effort (https://www.princeton.edu/~kahneman/docs/attention_and_effort/Attention_hi_quality.pdf - bottom of page 7 of the book, page 14 of the pdf), which posited that attention is limited in overall capacity, and that a person's ability to perform simultaneous tasks depends on how much "capacity" the jobs require. Later researchers - Johnston and Heinz (1978) and Navon and Gopher (1979) - went further with Kahneman's study.[1] Shalom Fisch used Kahneman's capacity theory, as others did for their research, to publish a paper on children's understanding of educational content on television.[2] It is a communication theory, based on a model, which is used to explain and predict how children learn from educational television programming. It is formed by combining cognitive psychology with the limited capacity of working memory. Working memory is explained as having limited resources available for processing external information; when demands exceed capabilities, the material will not be attended to. The model attempts to explain how cognitive resources are allocated when individuals, particularly children, comprehend both the educational and the narrative content of an educational program. Capacity theory suggests that when educational content is tangential to the narrative content, the two information sources compete with one another for limited resources in working memory. However, when the distance between educational content and narrative content is small, the processes complement one another rather than compete for resources, and an individual is therefore likely to comprehend more.

*Defining Attribute Model

*Defining Attribute Model The idea that a concept is characterized by a list of features that are necessary to determine if an object is a member of the category.

*Exemplar Model

*Exemplar Model Info stored about the members of a category is used to determine category membership.
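
The contrast between the two models above can be made concrete with a small Python sketch (the bird features, stored exemplars, and similarity threshold are all invented for illustration):

    # Defining-attribute model: membership requires every necessary feature.
    BIRD_DEFINING_FEATURES = {"has_feathers", "lays_eggs", "has_wings"}

    def is_bird_by_definition(features):
        return BIRD_DEFINING_FEATURES.issubset(features)

    # Exemplar model: membership is judged by similarity to stored members.
    BIRD_EXEMPLARS = [
        {"has_feathers", "has_wings", "flies", "sings"},      # robin-like
        {"has_feathers", "has_wings", "swims", "lays_eggs"},  # penguin-like
    ]

    def is_bird_by_exemplars(features, threshold=2):
        # Similarity here = feature overlap with the closest stored exemplar.
        best_overlap = max(len(features & ex) for ex in BIRD_EXEMPLARS)
        return best_overlap >= threshold

    ostrich = {"has_feathers", "has_wings", "runs"}
    print(is_bird_by_definition(ostrich))  # False: 'lays_eggs' was not observed
    print(is_bird_by_exemplars(ostrich))   # True: overlaps enough with stored birds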

analogy

The analogy between human cognition and computer functioning adopted by the information processing approach is limited. Computers can be regarded as information processing systems insofar as: (i) they combine presented information with stored information to provide solutions to a variety of problems, and (ii) most computers have a central processor of limited capacity, and it is usually assumed that capacity limitations affect the human attentional system. But: (i) the human brain has the capacity for extensive parallel processing, while computers often rely on serial processing; and (ii) humans are influenced in their cognitions by a number of conflicting emotional and motivational factors.

effector

A muscle or a gland that responds to a stimulus upon receiving a message from a motor neuron.

Atkinson and Shiffrin modal model

According to the Atkinson and Shiffrin modal model, memory is structured into three stores: sensory memory (a short-duration store that encodes mainly visual and auditory information), short-term memory (which holds information for up to about 30 seconds) and long-term memory (which holds information for longer than 30 seconds). Information passes between the stores in a linear way, and is passed from short-term memory to long-term memory through repetition (rehearsal).
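
A toy Python sketch of the modal model's linear flow (the rehearsal cutoff of three repetitions is an arbitrary stand-in; the model itself does not specify a number):

    class ModalModel:
        def __init__(self):
            self.sensory, self.short_term, self.long_term = [], [], []

        def perceive(self, item):
            self.sensory.append(item)         # very brief sensory store

        def attend(self, item):
            if item in self.sensory:          # attention moves items onward
                self.sensory.remove(item)
                self.short_term.append(item)  # held for up to ~30 seconds

        def rehearse(self, item, repetitions):
            # Repetition is what transfers items from STM to LTM in this model.
            if item in self.short_term and repetitions >= 3:
                self.long_term.append(item)

    m = ModalModel()
    m.perceive("phone number"); m.attend("phone number"); m.rehearse("phone number", 5)
    print(m.long_term)  # ['phone number']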

Bottleneck theories

Bottleneck theories suggest that because attentional resources are limited, some filtering of information takes place; the issue is where in the system this occurs. Broadbent, investigating attention using auditory stimuli, suggests this happens early on, on the basis of physical characteristics, e.g. the location of sounds or their pitch. He presented three digits simultaneously to each ear, and participants were asked to report the digits either in pairs or by ears. The "pairs" condition, which involves constant switching from ear to ear, is more difficult; the easier "ears" condition reports from one location at a time, supporting the idea of selection on a physical basis. This theory suggests that nothing about meaning is processed. However, Treisman found that attention is switched to unattended information if this is meaningful, so there is some semantic processing of unattended information. This is supported by the cocktail party effect: if we are in a conversation with someone, attention is switched if we hear our own name in an unattended conversation. Deutsch and Deutsch challenge bottleneck theories, suggesting that there are no resource limitations on processing and that selection takes place at the response stage. Whether selection is early or late appears to depend on the situation: Lavie suggested that when selection takes place is determined by perceptual load, i.e. the amount of information available to the senses. Where load is high, selection is early; where it is low, selection is late.

egocentric representation

Brain region activated during egocentric representation: the parietal cortex. Egocentric navigation: the representation of "space as we see it".

Broca's area

Broca's area is a region of the frontal lobe of the left hemisphere. The area was identified as being involved in the production of speech by the French surgeon Pierre Paul Broca in 1861. Broca described a patient who had lost the use of speech, and was only able to pronounce the syllable 'tan', but was still able to comprehend spoken language, and communicate with hand gestures. On autopsy, the patient was found to have a lesion in what is now known as Broca's area (1).

Brooks, Baddeley, and Lieberman

Brooks - studied the scanning of visual images. Apparently, scanning a physically present visual array conflicts with scanning a mental array. This strongly reinforces the conclusion that when people scan a mental array, they are scanning a representation that is analogous to a physical visual array. The conflict is spatial, not specifically visual. Baddeley and Lieberman - studied whether the interference in Brooks' task is spatial rather than visual: a spatial auditory tracking task produced far greater impairment in the image scanning task than a brightness judgment task did, confirming Brooks' conclusion.

behaviorism

Classical conditioning, made famous by Ivan Pavlov, demonstrated how an organism learns behavior. This learning theory forms the base of behaviorism. According to behaviorist John B. Watson, human actions can be understood through the study of one's behavior and reactions. Behaviorism holds that behavior can be measured, trained, and modified. The behaviorist perspective of psychology proposes that all the things which organisms do are their behavior; according to this perspective, thinking and feeling are behavior. B. F. Skinner, a theorist in behaviorism, is best known for the theory of radical behaviorism. It claims that animal and human behaviors are comparable and that the science of behavior is a natural science, and it holds that our environment influences our behavior. Skinner said that human beings could generate linguistic stimuli which would then guide their behavior; his theory focused on instructional control over human behavior. Behaviorism prevailed during the first half of the 20th century, after which the cognitive perspective overtook it.

Declarative Memory or Explicit Memory

Declarative Memory or Explicit Memory Declarative memory refers to the type of memories we are able to consciously and deliberately access, and describe the contents of. Declarative memory is divided into the sub-categories of episodic memory (events) and semantic memory (facts).

inattentional blindness

Explain the notion of inattentional blindness. Inattentional blindness refers to a failure to notice an unexpected event because the observer is paying attention to other events in the visual field. The unexpected event would typically be quite noticeable to the observer if he weren't occupied paying attention to something else. For instance, in Daniel Simons's "Gorillas in Our Midst" experiment, many subjects were so busy counting the number of times players in white shirts passed a basketball between them that they didn't notice a person in a black gorilla suit walk through the scene! Inattentional blindness, also known as perceptual blindness, is a psychological lack of attention that is not associated with any vision defects or deficits. It may be further defined as the event in which an individual fails to perceive an unexpected stimulus that is in plain sight. The term inattentional blindness entered the psychology lexicon in 1998 when psychologists Arien Mack, PhD, of the New School for Social Research, and the late Irvin Rock, PhD, of the University of California, Berkeley, published the book "Inattentional Blindness," describing a series of experiments on the phenomenon. In Mack and Rock's standard procedure, they presented a small cross briefly on a computer screen for each of several experimental trials and asked participants to judge which arm of the cross was longer. After several trials, an unexpected object, such as a brightly colored rectangle, appeared on the screen along with the cross. Mack and Rock reported that participants - busy paying attention to the cross - often failed to notice the unexpected object, even when it had appeared in the center of their field of vision. When participants' attention was not diverted by the cross, they easily noticed such objects. Following these initial findings, Mack and Rock discovered that participants were more likely to notice their own names or a happy face than stimuli that were not as meaningful to them - for example, another name or an upside-down face. Finally, the team found that even though participants did not detect the presence of unattended words that were presented on a computer screen, such stimuli nonetheless exerted an implicit influence on participants' later performance on a word-completion task. "I came away from our studies convinced that there's no conscious perception without attention," Mack says. She adds that the findings also led her to suspect that the brain undertakes considerable perceptual processing outside of conscious awareness before attention is engaged, and that objects or events that are personally meaningful are most likely to capture people's attention.

Functional Magnetic Resonance Imaging FMRI

Functional magnetic resonance imaging (fMRI) is dependent on blood flow; it is a hemodynamic technique (dependent on hemoglobin) that detects the consumption of oxygen and blood in the brain. It measures changes in blood flow, which provides an indirect measure of neural activity, by detecting changes in the ratio of oxygenated to deoxygenated hemoglobin; this ratio changes depending on what is active in the brain (BOLD: blood-oxygen-level-dependent). BOLD responses produce highly detailed maps of neural activity. Advantages of fMRI: non-invasive; good spatial resolution; no radioactive tracer. Limitations of fMRI: an indirect measure of neural activity (it depends on the coupling of neural activity with oxygen use and blood flow); mediocre temporal resolution; produces complex, high-level data that can be a little hard to interpret; expensive; can be bad for claustrophobic people; very time-consuming.

guided search

Guided search model A second main function of preattentive processes is to direct focal attention to the most "promising" information in the visual field.[17] There are two ways in which these processes can be used to direct attention: bottom-up activation (which is stimulus-driven) and top-down activation (which is user-driven). In the guided search model by Jeremy Wolfe,[23] information from top-down and bottom-up processing of the stimulus is used to create a ranking of items in order of their attentional priority. In a visual search, attention will be directed to the item with the highest priority. If that item is rejected, then attention will move on to the next item and the next, and so forth. The guided search theory follows that of parallel search processing. An activation map is a representation of visual space in which the level of activation at a location reflects the likelihood that the location contains a target. This likelihood is based on preattentive, featural information of the perceiver. According to the guided search model, the initial processing of basic features produces an activation map, with every item in the visual display having its own level of activation. Attention is demanded based on peaks of activation in the activation map in a search for the target.[23] Visual search can proceed efficiently or inefficiently. During efficient search, performance is unaffected by the number of distractor items. The reaction time functions are flat, and the search is assumed to be a parallel search. Thus, in the guided search model, a search is efficient if the target generates the highest, or one of the highest activation peaks. For example, suppose someone is searching for red, horizontal targets. Feature processing would activate all red objects and all horizontal objects. Attention is then directed to items depending on their level of activation, starting with those most activated. This explains why search times are longer when distractors share one or more features with the target stimuli. In contrast, during inefficient search, the reaction time to identify the target increases linearly with the number of distractor items present. According to the guided search model, this is because the peak generated by the target is not one of the highest.[23] What is guided search? Search in which attention can be restricted to a subset of possible items on the basis of information about the target item's basic features.
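
A minimal sketch of the activation-map idea, assuming hypothetical items with feature sets and an optional bottom-up salience value (none of this is Wolfe's actual implementation): activation combines bottom-up salience with top-down feature matches, and attention visits items in decreasing order of activation.

    def guided_search(items, target_features, salience=None):
        salience = salience or {}

        def activation(item):
            # Top-down: how many features the item shares with the target.
            # Bottom-up: any stimulus-driven salience the item carries.
            top_down = len(item["features"] & target_features)
            return top_down + salience.get(item["name"], 0.0)

        # Attention moves down the activation map, highest peak first.
        for visited, item in enumerate(sorted(items, key=activation, reverse=True), 1):
            if item["features"] == target_features:
                return visited  # number of items inspected before success

    items = [
        {"name": "target", "features": {"red", "horizontal"}},
        {"name": "d1",     "features": {"red", "vertical"}},
        {"name": "d2",     "features": {"green", "horizontal"}},
    ]
    print(guided_search(items, {"red", "horizontal"}))  # 1: the target peaks first

Note how distractors that share a feature with the target get intermediate activation, which is consistent with the observation above that search slows when distractors share features with the target.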

emotions

Human emotions: Acceptance Affection Aggression Ambivalence Apathy Anxiety Boredom Compassion Confusion Contempt Depression Doubt Ecstasy Empathy Envy Embarrassment Euphoria Forgiveness Frustration Gratitude Grief Guilt Hatred Hope Horror Hostility Homesickness Hunger Hysteria Interest Loneliness Love Paranoia Pity Pleasure Pride Rage Regret Remorse Shame Suffering Sympathy. The word 'emotion' encompasses a broad range of feelings, behavior, and changes in the body and mind. The noted professor and psychologist Robert Plutchik listed the six primary or main types of emotions as follows: fear, joy, love, sadness, surprise and anger. These, he said, can be classified as primary, secondary and tertiary emotions. Along with the primary emotions, we also experience secondary emotions, which are a direct reaction to the primary emotions. For instance, a person may feel ashamed or guilty after experiencing the primary emotion of fear. Other emotions included in this category are contentment, relief, optimism, pride and enthrallment.

haptic memory

Haptic memory represents SM for the tactile sense of touch. Sensory receptors all over the body detect sensations such as pressure, itching, and pain. Information from these receptors travels through afferent neurons in the spinal cord to the postcentral gyrus of the parietal lobe in the brain. This pathway comprises the somatosensory system. Evidence for haptic memory has only recently been identified, resulting in a small body of research regarding its role, capacity, and duration.[19] Already, however, fMRI studies have revealed that specific neurons in the prefrontal cortex are involved in both SM and motor preparation, which provides a crucial link between haptic memory and its role in motor responses.[20] Relationship with other memory systems: SM is not involved in higher cognitive functions such as the consolidation of memory traces or the comparison of information.[21] Likewise, the capacity and duration of SM cannot be influenced by top-down control; a person cannot consciously think or choose what information is stored in SM, or how long it will be stored for.[4] The role of SM is to provide a detailed representation of our entire sensory experience, from which relevant pieces of information can be extracted by short-term memory (STM) and processed by working memory (WM).[2] STM is capable of storing information for 10-15 seconds without rehearsal, while working memory actively processes, manipulates, and controls information. Information from STM can then be consolidated into long-term memory, where memories can last a lifetime. The transfer of SM to STM is the first step in the Atkinson-Shiffrin memory model, which proposes a structure of memory.

Iconic memory

Iconic memory. The mental representations of visual stimuli are referred to as icons (fleeting images). Iconic memory was the first sensory store to be investigated, with experiments dating back as far as 1740. One of the earliest investigations into this phenomenon was by Ján Andrej Segner, a German physicist and mathematician. In his experiment, Segner attached a glowing coal to a cart wheel and rotated the wheel at increasing speed until an unbroken circle of light was perceived by the observer. He calculated that the glowing coal needed to make a complete circle in under 100 ms to achieve this effect, which he determined was the duration of this visual memory store. In 1960, George Sperling conducted a study in which participants were shown a set of letters for a brief amount of time and were asked to recall the letters they were shown afterwards. Participants recalled fewer letters when asked about the whole group of letters, but recalled more when asked about specific subgroups of the whole. These findings suggest that iconic memory in humans has a large capacity, but decays very rapidly.[10] In a 2001 experiment, Vogel et al. presented a display of between 3 and 12 objects; after 900 ms, a second display was presented that was either the same or had one object changed. Participants' ability to decide whether the two displays were identical was almost perfect with four objects, but steadily declined as the number of items in the display increased above four, which determined the capacity to be around four items.[11] Another study set out to test the idea that visual sensory memory consists of coarse-grained and fine-grained memory traces, using a mathematical model to quantify each; the study suggested that the dual-trace model of visual memory outperformed single-trace models.[12]
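
A toy decay model in the spirit of Sperling's result (all numbers - the readout rate, the decay constant, and the legibility cutoff - are invented to make the arithmetic visible, not estimates from the literature): each letter can be reported only if the icon is still legible when its turn to be read out arrives, so a cued subset survives while a whole report does not.

    import math

    def items_recalled(n_items, readout_per_sec=10.0, tau=0.5):
        # Each item is read out in turn; it can be reported only if the icon
        # is still legible (availability above 0.5) when its turn arrives.
        recalled = 0
        for i in range(n_items):
            t = i / readout_per_sec          # seconds until this item is read
            if math.exp(-t / tau) > 0.5:     # exponential decay of the icon
                recalled += 1
        return recalled

    print(items_recalled(12))  # whole report: only ~4 items survive the decay
    print(items_recalled(4))   # partial (cued-row) report: all 4 are reported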

influences on perception

Influences on Perception
Bottom-up processing: information processing in which individual components or bits of data are combined until a complete perception is formed.
Top-down processing: application of previous experience and conceptual knowledge to recognize the whole of a perception and thus easily identify the simpler elements of that whole.
Perceptual set: an expectation of what will be perceived, which can affect what actually is perceived. David Rosenhan and colleagues were admitted as patients to various mental hospitals with "diagnoses" of schizophrenia. Once inside they acted normally, but the staff members saw only what they expected to see, not what was actually occurring; the real patients were the first to realize that the pseudopatients were not really mentally ill.
Inattentional blindness: the phenomenon in which we miss an object in our field of vision because we are attending to another. Simons and his colleagues showed participants a videotape of a basketball game in which one team is uniformed in white and the other in black, and instructed them to count how many times the ball is passed from one player to another on either the white or the black team. About a third of participants typically fail to later recall the presence on the screen of even extremely incongruent stimuli (e.g. a man dressed in a gorilla costume) under such conditions. David Strayer and his colleagues showed that drivers often failed to perceive vehicles braking directly in front of them while engaged in hands-free cell phone conversations.
James Haxby suggests that there is a "core system" of face perception that uses the universal features of the human face to make judgments about people's identities.
Cross-modal perception: the way we combine information from two sensory modalities.
Social perception: facial expressions, the visual cues for emotional perception, often take priority over the auditory cues associated with a person's speech intonation and volume, as well as the actual words spoken.

Memory for Pictures

Memory for Pictures: compared subjects' memory for sentences vs. memory for pictures; subjects had a higher rate of errors in the sentence condition than in the picture condition (where memory was nearly perfect); the meaning of a picture is more memorable than its style. Picture memory: memory for pictures is tied to their interpretation (droodles are remembered better with an explanation of their meaning than without one).

neurons (nerve cells)

Neurons (nerve cells) have three parts that carry out the functions of communication and integration: dendrites, axons, and axon terminals. They have a fourth part, the cell body or soma, which carries out the basic life processes of neurons. Like all organ systems, the nervous system can do its specialized functions because the cells that make it up are specialized, both in how they work individually and in how they are connected to each other. The nervous system contains two kinds of cells: neurons, the cell type (primarily) responsible for communication and integration in the nervous system, and glia, which protect the neurons but also modify their action. Neurons have a single axon, which is the output of the neuron. Axons are long (up to several feet) but thin, sort of like a wire; they are designed, both in shape and in function, to carry information reliably and quickly over long distances (communication). Axons usually branch to connect to different neurons, and the axon terminals at the ends of axons make the actual connections to other neurons. Axons carry information from the senses to the CNS (central nervous system: brain and spinal cord), from one part of the CNS to another, or from the CNS to muscles and glands, which generate the behaviors you do. Neurons usually have several dendrites (from the Greek dendron, for tree branches), which are the input to a neuron. Dendrites are designed, both in shape and in function, to combine the information they receive (integration). Each dendrite may branch up to six times to collect signals from the axon terminals of other neurons; dendrites are covered with synapses (connections) from many other neurons and combine the signals they get from these synapses.

Network models. Collins and Quillian (1969)

Network models. Collins and Quillian (1969) proposed that semantic knowledge is underpinned by a set of nodes, each representing a specific feature or concept, which are all connected to one another; nodes that are related in some way, such as often coinciding in time, are more strongly connected. The Hierarchical Network Model of Semantic Memory, first introduced by Collins and Quillian in 1969, was the first systematic model of semantic memory. Semantic memory refers to long-term memory for ideas and concepts that come not from personal experience but from common knowledge, such as the sounds of letters and colours. The model suggests that semantic memory is organised into two categories: nodes, which represent major concepts such as an animal or a bird, and properties, the attributes or features of a concept such as brown or wings. So, how does this theory link to the primary classroom? Last week in one of our education studies seminars it was mentioned that the average teacher expects a response from a child within 1 second of asking a question; it was argued that this is not enough time for children to process the question, find the answer and then verbally present it, and that the teacher should therefore allow more time for a response before moving on to another child. This theory describes how retrieving acquired knowledge works: a stimulus activates a set of nodes, which then activate other related nodes, causing the activation to become widely spread. For example, if someone is asked "Is a dog an animal?", the time taken to respond depends on the number of links between the node that recognises a dog and the node that recognises an animal. This supports the argument that children should be allowed more time to think of their answers. Cognitive economy refers to the principle that information stored at one level of the hierarchy is not repeated at any other level: a fact or concept is stored at the highest level to which it applies, e.g. a property shared by all animals is stored with the animal node rather than repeated at the fish node.
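
A minimal sketch of a hierarchical network with cognitive economy (the three-node hierarchy and its properties follow the textbook canary example, coded up as an assumption rather than Collins and Quillian's own program): verification walks up the "isa" links, and the number of links traversed stands in for response latency.

    # Each node stores only the properties not already stored higher up.
    network = {
        "animal": {"isa": None,     "properties": {"breathes", "eats"}},
        "bird":   {"isa": "animal", "properties": {"has wings", "can fly"}},
        "canary": {"isa": "bird",   "properties": {"is yellow", "can sing"}},
    }

    def verify(concept, prop):
        links_traversed = 0
        node = concept
        while node is not None:
            if prop in network[node]["properties"]:
                return True, links_traversed
            node = network[node]["isa"]   # move one level up the hierarchy
            links_traversed += 1
        return False, links_traversed

    print(verify("canary", "can sing"))  # (True, 0): stored locally, fast
    print(verify("canary", "breathes"))  # (True, 2): two links up, slower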

retroactive interference

New learning interferes with old memory: newly gained knowledge interferes with stored old memory. New memory replaces and inhibits the recall of previously stored memory, hence leading to its loss. (By contrast, in proactive interference old stored information interferes with newly formed memory: stored information projects itself forward and hampers the gain of new data.)
Cause ★ Preoccupation of the brain with learning new data causes faulty or no retrieval of old data. ★ The data being acquired is either highly similar or extremely contradictory to the previously stored data.
Brain structures involved ★ Left anterior ventral prefrontal cortex ★ Frontal cortex ★ Ventrolateral prefrontal cortex ★ Left anterior prefrontal cortex
Mechanisms involved ★ Competition between old and new data ★ Unlearning of old data
Examples ★ Replacing your old password with a new one, causing you to forget the old password. ★ A student confuses a concept learned previously with those learned later on. (The proactive counterparts: recollecting only the old password and not the new one after changing it; a student unable to grasp a new concept due to confusion caused by already-learned similar concepts.)
Loss of information or memories by way of interference can be reduced by the use of mnemonic devices or by regular rehearsal and recollection. A few studies have also found that sleep helps prevent retroactive interference, as the chance of interfering events occurring is minimal during the sleep cycle.

neglect

What is neglect? Neglect is a disorder of visual attention, in which there is an inability to attend to or respond to stimuli in the contralesional visual field (typically the left visual field after a right parietal damage). Neglect can also involve half of the body or half of an object. The disorder seems to stem from an attentional processing problem rather than from a visual problem: the visual system seems to remain intact. How is extinction related to neglect? Extinction might be neglect in a milder form. Extinction is the inability to perceive a stimulus in the presence of another stimulus, typically in a comparable position in the other visual field. Neglect is a more severe disorder, in which the entire contralesional field might be affected.

Pandemonium architecture.

Pandemonium architecture. The pandemonium architecture was originally developed by Oliver Selfridge in the late 1950s. The architecture is composed of different groups of "demons" working independently to process the visual stimulus. It arose in response to the inability of template matching theories to offer a biologically plausible explanation of the image constancy phenomena. Contemporary researchers praise this architecture for its elegance and creativity: the idea of having multiple independent systems (e.g., feature detectors) working in parallel to address the image constancy phenomena of pattern recognition is powerful yet simple. The basic idea of the pandemonium architecture is that a pattern is first perceived in its parts before the "whole".[1] Pandemonium architecture was one of the first computational models in pattern recognition, and although not perfect, it influenced the development of modern connectionist, artificial intelligence, and word recognition models.[2] Each group of demons is assigned to a specific stage in recognition, and within each group the demons work in parallel. There are four major groups of demons in the original architecture:[3] 1. Image demon - records the image that is received in the retina. 2. Feature demons - there are many feature demons, each representing a specific feature; for example, there is a feature demon for short straight lines, another for curved lines, and so forth. Each feature demon's job is to "yell" if it detects the feature it corresponds to. Note that feature demons are not meant to represent any specific neurons, but a group of neurons with similar functions; for example, the vertical-line feature demon represents the neurons that respond to vertical lines in the retinal image. 3. Cognitive demons - watch the "yelling" from the feature demons. Each cognitive demon is responsible for a specific pattern (e.g., a letter of the alphabet). The "yelling" of the cognitive demons is based on how much of their pattern was detected by the feature demons: the more corresponding features a cognitive demon finds, the louder it "yells". For example, if the curved, long straight, and short angled line feature demons are yelling really loudly, the R letter cognitive demon might get really excited, and the P letter cognitive demon might be somewhat excited as well, but the Z letter cognitive demon is very likely to be quiet. 4. Decision demon - represents the final stage of processing. It listens to the "yelling" produced by the cognitive demons and selects the loudest one; the demon that gets selected becomes our conscious perception. Continuing with the previous example, the R cognitive demon would be the loudest, seconded by P; therefore we would perceive R, but if we were to make a mistake because of poor display conditions (e.g., letters quickly flashed or with parts occluded), it would likely be P. Note that the "pandemonium" simply represents the cumulative "yelling" produced by the system.
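
The four stages translate almost directly into code. A minimal Python sketch (the feature lists for R, P, and Z are invented for illustration; real feature demons would respond to retinal input):

    # Feature demons report detected features; cognitive demons "yell" in
    # proportion to how many of their pattern's features were detected;
    # the decision demon simply picks the loudest yell.
    PATTERNS = {
        "R": {"vertical line", "curve", "angled line"},
        "P": {"vertical line", "curve"},
        "Z": {"horizontal line", "angled line"},
    }

    def recognize(detected_features):
        yells = {letter: len(features & detected_features)
                 for letter, features in PATTERNS.items()}
        return max(yells, key=yells.get)  # the decision demon's choice

    print(recognize({"vertical line", "curve", "angled line"}))  # 'R' (P is close behind)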

Psychodynamic approach

Psychodynamics is another important perspective in psychology. Sigmund Freud proposed the concept of psychodynamics, suggesting that psychological processes are flows of psychological energy in the brain. This perspective studies how psychological processes drive our feelings and behavior, and it focuses on both the conscious and the unconscious parts of the human mind. Our mental forces may be emotional forces, or forces arising from interactions between emotional and motivational forces acting at the subconscious level. According to Freud, the ego lies at the core of all psychological processes, and our behavior mirrors the emotional processes active in our mind. The psychodynamic perspective assumes that unconscious motives and childhood experiences influence our behavior, and that behavior is driven by instinct. It holds that every behavior has a cause and is therefore determined.

word superiority effect

Reicher (1969): participants had to name an underlined (cued) letter that appeared alone, in a nonword, or in a word. The bottom-up prediction is equal performance across the three conditions; the top-down prediction is better performance when the target letter is embedded in a word, because the word provides context for what the letter is. Performance was in fact better for letters in words: the word superiority effect.

representational neglect and unilateral visual neglect

Representational Neglect The clinical neurological syndrome of representational (or imaginal) neglect, discovered in the late 1970s, is probably best understood in connection with the closely associated perceptual deficit known as left unilateral neglect (or hemineglect), descriptions of which can be found in the clinical literature as far back as 1876, and which has been quite extensively studied since the early 20th century (Halligan & Marshall, 1993; Bartolomeo, 2007). Although not everyone who suffers from perceptual unilateral neglect also has representational neglect, representational without perceptual neglect seems to be relatively rare (Coslett, 1997; Bartolomeo et al., 1994; Bartolomeo, 2002, 2007). Left unilateral neglect is a fairly common consequence of damage to the parietal cortex of the right hemisphere (although other parts of the hemisphere, notably the superior temporal cortex, may also be implicated). (Similar damage to the left hemisphere does not normally produce equivalent symptoms of right unilateral neglect, or, if it does, the symptoms are generally much milder, and recovery is relatively rapid (Driver & Vuilleumier, 2003).) The syndrome manifests itself as a failure to notice or pay normal attention to things and events to the victim's left, and to the left sides of objects. Sufferers, who are generally unaware of their problem, have been known to do such things as forgetting to shave (or apply makeup to) the left side of their face, forgetting to wear their left shoe, or failing to eat the food on the left side of their plate despite complaining that they are still hungry. If asked to mark the midpoint of a horizontal line, they are likely to mark a point well to the right of center, and if asked to copy a drawing they will generally copy only the right-hand side, including few if any details to the left. Other sense modes as well as vision are also affected. Patients often fail to explore the left sides of objects when examining them by touch, and in acute cases may hold their head turned towards the right, and may fail to respond to questions from someone standing to their left, although they respond quite normally to someone on the right. Despite all this, victims of unilateral neglect are clearly not blind or otherwise insensible to things on their left side. They can see things there if their attention is explicitly drawn to them, and may even occasionally spontaneously notice things to their left, especially if nothing much is going on to the right (Halligan & Marshall, 1993; Bartolomeo & Chokron, 2001; Driver & Vuilleumier, 2003).

Representative Heuristic

Representative heuristic (phone-book example): subjects are told that a person was randomly selected out of the phone book and are given a description of that person ("Linda is strong, independent," etc.). Asked whether Linda is more likely to be a bank teller or a feminist bank teller, people tend to answer "feminist bank teller" because the description activates a schema (automatic processing). But because the description activates the schema, they ignore the statistics: feminist bank tellers are a subset of the entire set of bank tellers, so the conjunction can never be the more probable option.
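
The statistic being ignored is the conjunction rule. As a brief formalization (standard probability, not from the flashcard itself), let T = "Linda is a bank teller" and F = "Linda is a feminist"; then

\[
  P(T \cap F) \;=\; P(T)\,P(F \mid T) \;\le\; P(T), \qquad \text{since } P(F \mid T) \le 1,
\]

so "feminist bank teller" can never be more probable than "bank teller".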

saccade

Saccades are rapid, ballistic movements of the eyes that abruptly change the point of fixation. They range in amplitude from the small movements made while reading to the much larger movements made while gazing around a room. Saccades can be elicited voluntarily, but they also occur reflexively whenever the eyes are open, even when fixated on a target. The rapid eye movements that occur during an important phase of sleep are also saccades. After the onset of a target for a saccade (for example, the movement of an already fixated target), it takes about 200 ms for the eye movement to begin. During this delay, the position of the target with respect to the fovea is computed (that is, how far the eye has to move), and the difference between the initial and intended position, or "motor error", is converted into a motor command that activates the extraocular muscles to move the eyes the correct distance in the appropriate direction. Saccadic eye movements are said to be ballistic because the saccade-generating system cannot respond to subsequent changes in the position of the target during the course of the eye movement. If the target moves again during this time (which is on the order of 15-100 ms), the saccade will miss the target, and a second saccade must be made to correct the error.
What is the purpose of the saccadic eye movement system? To place the image of the target on the fovea as rapidly as possible. How rapid is a saccadic eye movement? Approximately 400-700 degrees of arc per second.
Method of testing:
1. Use your nose as the reference point.
2. Larger saccades are possible.
3. Repeat several times.
4. Hold the target within the limit of gaze.
5. Test vertical saccades.
6. Initially, do not restrain the head.
7. Repeat with the head held straight.
The examiner should determine: Are the saccades of normal velocity? Are they promptly initiated? Are they accurate? Are they performed correctly?
Saccade pulse: the burst of activity beginning about 8 ms before the eyes start to move; it overcomes orbital viscous drag so that the eye moves quickly from one position to another [van Gisbergen]. Pulse: a burst of concentrated energy used to move the eye into the required position [Leigh].
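
As a rough back-of-the-envelope illustration of these numbers (my arithmetic, not from the source; real saccade velocity is not constant), a short Python sketch:

# Order-of-magnitude saccade arithmetic using the figures quoted above.
latency_ms = 200          # approximate time from target onset to movement onset
velocity_deg_per_s = 500  # within the quoted 400-700 deg/s range

for amplitude_deg in (2, 10, 20):
    travel_ms = amplitude_deg / velocity_deg_per_s * 1000
    print(f"{amplitude_deg:>2} deg saccade: ~{travel_ms:.0f} ms in flight, "
          f"~{latency_ms + travel_ms:.0f} ms from target onset to landing")

Even a large 20-degree saccade is in flight for only about 40 ms, which is part of why the system cannot correct its course mid-movement.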

visual attention

State and discuss the factors that determine where a person looks in a visual scene, and support with research examples.
1. Stimulus saliency: areas of a stimulus attract attention because of their physical properties; color, contrast, and orientation are the relevant properties, and together they define a saliency map of the scene (a simplified sketch follows this entry).
2. Scene schema: the observer understands the meaning of the scene, so previous knowledge and experience influence eye movements; observers tend to fixate on meaningful objects rather than empty spaces.
3. Task demands: task demands override stimulus saliency; eye movements usually precede the related motor actions by a fraction of a second.
Describe one phenomenon where timing is critical to visual attention. One such phenomenon is the "attentional blink": difficulty perceiving and responding to the second of two target stimuli amid a rapid stream of distracting stimuli, when the observer has responded to the first target within 200 to 500 ms before the second stimulus is presented.
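
For the saliency-map factor, here is a minimal, hypothetical Python sketch. It treats local contrast (computed as a difference of Gaussians) as the only saliency cue; full models such as Itti and Koch's also combine color and orientation channels:

import numpy as np
from scipy.ndimage import gaussian_filter

def contrast_saliency(image):
    """Crude saliency map: center-surround contrast via difference of Gaussians."""
    center = gaussian_filter(image.astype(float), sigma=1)
    surround = gaussian_filter(image.astype(float), sigma=5)
    saliency = np.abs(center - surround)   # high where local contrast is high
    return saliency / saliency.max()       # normalize to [0, 1]

# A gray image with one bright patch: the patch dominates the saliency map,
# so a purely saliency-driven observer would fixate there first.
img = np.full((64, 64), 0.5)
img[28:36, 28:36] = 1.0
smap = contrast_saliency(img)
y, x = np.unravel_index(np.argmax(smap), smap.shape)
print(f"most salient location: ({y}, {x})")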

attentional selection

What is attentional selection, and why is it important? Attentional selection is the ability to attend to specific properties of a display, which may require switching attention from previous properties without moving the eyes. This ability is important because it allows one to focus on the relevant information in a display rather than "getting lost" in the entire display. During attentional selection, different aspects of the display appear more prominent as one shifts attention to the property selected. Explain how attention affects perception. Attention is a vital aspect of perception because we cannot process all of the input from our senses. The term "attention" refers to a large set of selective mechanisms that enable us to focus on some stimuli at the expense of others. Though this chapter talked almost exclusively about visual attention, attentional mechanisms exist in all sensory domains.

chunking for STM

The grouping of information into meaningful units for easier handling by short-term memory.
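
A toy illustration (my own example): the same twelve digits held as twelve separate items versus three meaningful chunks (famous years):

# Chunking: 12 digits exceed the typical STM span, but 3 familiar years do not.
digits = "149217761969"

unchunked = list(digits)                                     # 12 separate items
chunks = [digits[i:i+4] for i in range(0, len(digits), 4)]   # ['1492', '1776', '1969']

print(len(unchunked), "items:", unchunked)
print(len(chunks), "chunks:", chunks)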

Serial processing / Parallel processing

The information processing models assume serial processing of stimulus inputs. Serial processing effectively means one process has to be completed before the next starts. Parallel processing assumes that some or all of the processes involved in a cognitive task occur at the same time. There is evidence from dual-task experiments that parallel processing is possible. It is difficult to determine whether a particular task is processed in a serial or parallel fashion, as it probably depends on (a) the processes required to solve the task, and (b) the amount of practice on the task. Parallel processing is probably more frequent when someone is highly skilled; for example, a skilled typist thinks several letters ahead, whereas a novice focuses on just one letter at a time.
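
The contrast can be sketched in code (an analogy, not a claim about neural implementation): the serial version finishes each item before starting the next, while the parallel version overlaps them:

import concurrent.futures
import time

def process(stimulus):
    time.sleep(0.1)        # stand-in for a 100 ms cognitive operation
    return stimulus.upper()

stimuli = ["a", "b", "c", "d"]

# Serial: each item must finish before the next starts (~0.4 s total).
start = time.perf_counter()
serial = [process(s) for s in stimuli]
print(f"serial:   {time.perf_counter() - start:.2f} s")

# Parallel: all four items are processed at the same time (~0.1 s total).
start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor() as pool:
    parallel = list(pool.map(process, stimuli))
print(f"parallel: {time.perf_counter() - start:.2f} s")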

long-term memory (LTM)

The portion of memory that is more or less permanent, corresponding to everything we "know".

The typicality effect

The typicality effect during categorization describes a phenomenon whereby typical items are more easily judged as members of a category than atypical items. Prior studies of the typicality effect have often used an inclusion task, which asks participants to assess whether an item belongs to a category. However, the correct exclusion of non-members is also an important component of effective categorization, which has yet to be directly investigated.

Brown-Peterson Task/STM

Aim: To investigate the duration of short-term memory and provide empirical evidence for the multi-store model (Peterson and Peterson, 1959). Procedure: A lab experiment was conducted in which 24 participants (psychology students) had to recall trigrams (meaningless three-consonant syllables, e.g. TGH). To prevent rehearsal, participants were asked to count backwards in threes or fours from a specified random number until they saw a red light appear; this is known as the Brown-Peterson technique. Participants were asked to recall the trigrams after intervals of 3, 6, 9, 12, 15 or 18 seconds. Findings: The longer the interval delay, the fewer trigrams were recalled. Participants were able to recall 80% of trigrams after a 3-second delay, but after 18 seconds fewer than 10% of trigrams were recalled correctly. Conclusion: Short-term memory has a limited duration when rehearsal is prevented; it is thought that this information is lost from short-term memory through trace decay. The results also show that short-term memory is distinct from long-term memory in terms of duration, supporting the multi-store model of memory. Criticisms: The experiment has low ecological validity, as people do not try to recall trigrams in real life.

Visual orienting and attention

One obvious way to select visual information is to turn towards it, known as visual orienting. This may be a movement of the head and/or eyes towards the visual stimulus, called a saccade. Through a process called foveation, the eyes fixate on the object of interest, making the image of the visual stimulus fall on the fovea of the eye, the central part of the retina with the sharpest visual acuity. There are two types of orienting: Exogenous orienting is the involuntary and automatic movement that occurs to direct one's visual attention toward a sudden disruption in the peripheral visual field.[14] Attention is therefore externally guided by a stimulus, resulting in a reflexive saccade. Endogenous orienting is the voluntary movement that occurs in order to focus visual attention on a goal-driven stimulus.[14] Thus, the focus of attention of the perceiver can be manipulated by the demands of a task. A scanning saccade is triggered endogenously for the purpose of exploring the visual environment. Visual search relies primarily on endogenous orienting, because participants have the goal of detecting the presence or absence of a specific target object in an array of other, distracting objects. Visual orienting does not necessarily require overt movement, though.[15] It has been shown that people can covertly (without eye movement) shift attention to peripheral stimuli.[16] In the 1970s, it was found that the firing rate of cells in the parietal lobe of monkeys increased in response to stimuli in the receptive field when the monkeys attended to peripheral stimuli, even when no eye movements were allowed.[16] These findings indicate that attention plays a critical role in understanding visual search. Subsequently, competing theories of attention have come to dominate visual search discourse.[17] The environment contains a vast amount of information. We are limited in the amount of information we are able to process at any one time, so it is necessary that we have mechanisms by which extraneous stimuli can be filtered and only relevant information attended to. In the study of attention, psychologists distinguish between preattentive and attentional processes.[18] Preattentive processes are evenly distributed across all input signals, forming a kind of "low-level" attention. Attentional processes are more selective and can only be applied to specific preattentive input. A large part of the current debate in visual search theory centres on selective attention and what the visual system is capable of achieving without focal attention.[17]

visual search

Visual search is a type of perceptual task requiring attention that typically involves an active scan of the visual environment for a particular object or feature (the target) among other objects or features (the distractors).[1] Visual search can take place with or without eye movements. The ability to consciously locate an object or target amongst a complex array of stimuli has been extensively studied over the past 40 years. Practical examples of visual search can be seen in everyday life, such as when one is picking out a product on a supermarket shelf, when animals are searching for food amongst piles of leaves, when trying to find a friend in a large crowd of people, or simply when playing visual search games such as Where's Wally? Many visual search paradigms have used eye movement as a means to measure the degree of attention given to stimuli.[2][3] However, a large body of research suggests that eye movements can occur independently of attention, and are therefore not a reliable method of examining attention's role. Much of the previous literature on visual search uses reaction time to measure how long it takes to detect the target amongst its distractors; an example would be a green square (the target) amongst a set of red circles (the distractors).
Real-world visual search. In everyday situations, people most commonly search their visual fields for targets that are familiar to them. When searching for familiar stimuli, top-down processing allows one to identify targets of greater complexity more efficiently than can be represented in a feature or conjunction search task.[6] In a study of the reverse-letter effect, the finding that identifying an asymmetric letter amongst symmetric letters is more efficient than the reverse, researchers concluded that individuals recognize an asymmetric letter amongst symmetric letters more efficiently because of top-down processes.[7] Top-down processes allowed study participants to access prior knowledge regarding shape recognition of the letter N and quickly eliminate the stimuli that matched their knowledge.[7] In the real world, one must use prior knowledge every day to accurately and efficiently locate one's phone, keys, and so on, amongst a much more complex array of distractors.[6] While bottom-up processes may come into play when identifying unfamiliar objects, overall top-down processing highly influences visual searches that occur in everyday life.[6]
During visual search experiments, the posterior parietal cortex has shown strong activation in functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) experiments for inefficient conjunction search, which has also been confirmed through lesion studies. Patients with lesions to the posterior parietal cortex show low accuracy and very slow reaction times during a conjunction search task, but retain intact feature search on the ipsilesional (same side as the lesion) side of space.[24][25][26][27] Ashbridge, Walsh, and Cowey (1997)[28] demonstrated that during the application of transcranial magnetic stimulation (TMS) to the right parietal cortex, conjunction search was impaired 100 milliseconds after stimulus onset. This was not found during feature search.
Nobre, Coull, Walsh and Frith (2003)[29] identified, using fMRI, that the intraparietal sulcus located in the superior parietal cortex was activated specifically by feature search and the binding of individual perceptual features, as opposed to conjunction search. Conversely, the authors further identified that for conjunction search, the superior parietal lobe and the right angular gyrus show bilateral activation in fMRI experiments. Visual search thus primarily activates areas of the parietal lobe. In contrast, Leonards, Sunaert, Van Hecke and Orban (2000)[30] identified that significant activation is seen in fMRI experiments in the superior frontal sulcus, primarily for conjunction search. This research hypothesises that activation in this region may in fact reflect working memory for holding and maintaining stimulus information in mind in order to identify the target. Furthermore, significant frontal activation, including the ventrolateral prefrontal cortex bilaterally and the right dorsolateral prefrontal cortex, was seen during positron emission tomography for attentional spatial representations during visual search.[31] The regions associated with spatial attention in the parietal cortex coincide with the regions associated with feature search. Furthermore, the frontal eye field (FEF), located bilaterally in the prefrontal cortex, plays a critical role in saccadic eye movements and the control of visual attention.[32][33][34] Moreover, research on monkeys using single-cell recording found that the superior colliculus is involved in the selection of the target during visual search as well as in the initiation of movements.[35] Conversely, it has also been suggested that activation in the superior colliculus results from disengaging attention, ensuring that the next stimulus can be internally represented. The ability to attend directly to a particular stimulus during visual search, while inhibiting attention to unattended stimuli, has been linked to the pulvinar nucleus (in the thalamus).[36] Conversely, Bender and Butter (1987)[37] found no involvement of the pulvinar nucleus during visual search tasks in monkeys.
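
The feature/conjunction contrast above is usually summarized as reaction time versus set size; the numbers in this sketch are hypothetical, chosen only to show the qualitative pattern (a flat slope for efficient feature search, a rising slope for inefficient conjunction search):

# Toy reaction-time model of visual search (hypothetical parameters).
BASE_RT_MS = 400     # assumed baseline response time
CONJ_SLOPE_MS = 30   # assumed cost per additional item in conjunction search

def feature_search_rt(set_size):
    return BASE_RT_MS                              # target "pops out" at any set size

def conjunction_search_rt(set_size):
    return BASE_RT_MS + CONJ_SLOPE_MS * set_size   # as if items are checked serially

for n in (4, 8, 16, 32):
    print(f"set size {n:>2}: feature ~{feature_search_rt(n):.0f} ms, "
          f"conjunction ~{conjunction_search_rt(n):.0f} ms")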

shallow processing

We can process information in three ways: structural, phonemic, and semantic. Shallow processing takes the first two forms:
1. Structural processing (appearance): we encode only the physical qualities of something, e.g. the typeface of a word or how the letters look.
2. Phonemic processing: we encode its sound.
Shallow processing only involves maintenance rehearsal (repetition to help us hold something in the STM) and leads to fairly short-term retention of information. This is the only type of rehearsal to take place within the multi-store model.

motor neuron

Motor neurons carry response impulses from the brain and spinal cord to muscles or glands. The stimulus is a neurotransmitter released from an interneuron. These neurons send messages to muscles or glands; glands need a message from a motor neuron before they can carry out their function.

mental images

mental images analogical representations that preserve some of the characteristic attributes of our senses

slot structure

slot structure slots specify values of various attributes that members of a category possess. Default values: a typical value for a slot in a schema representation.
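
A minimal Python sketch of the slot idea (the "bird" schema and its slot values are invented for illustration): slots name the attributes, default values give the typical case, and atypical members override them:

# Hypothetical "bird" schema: slots with default (typical) values.
bird_schema = {"can_fly": True, "has_feathers": True, "habitat": "trees"}

def instantiate(schema, **overrides):
    """Fill a schema's slots, letting known facts override the defaults."""
    return {**schema, **overrides}

robin = instantiate(bird_schema)                  # all defaults apply
penguin = instantiate(bird_schema, can_fly=False, habitat="antarctic coast")

# Default values support inference: lacking contrary information, assume flight.
print(robin["can_fly"])    # True (default)
print(penguin["can_fly"])  # False (overridden)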

