Psychology Cognitive Approach to Behavior Test


Echoic memory

auditory sensory store

Episodic memory

a person's unique memory of a specific event, so it will be different from someone else's recollection of the same experience.

Elaborative

a memory technique that involves thinking about the meaning of the term to be remembered, rather than simply repeating the word to yourself over and over. For example, to remember the term "neuron," you might think about what a neuron does and link it to what you already know about the nervous system.

Core relational theme

refers to the summary of all the appraisal judgments that define a specific emotion; it captures the personal meaning that results from a particular pattern of appraisals about a specific person-environment relationship.

Primary appraisal

relates to deciding whether a situation is personally relevant. Motivational relevance: Is this situation relevant to my goals? An emotion will only be experienced if the answer is positive. Motivational congruence: Is this situation favourable to my goals? The emotion depends on the answer. Accountability: Who is responsible for what is happening? The outcome of primary appraisal is not a full-blown emotion but a basic positive or negative reaction. Experiencing more specific emotions requires secondary appraisal.

Amyloid-B protein

AD patients show abnormal levels of amyloid plaques: caused by amyloid-β proteins, which accumulate and damage the membranes of axons.

fMRI cognition reappraisal

Cognitive reappraisal: reinterpreting the meaning of emotional stimuli in ways that change their emotional value. When participants do this, fMRI scans show different levels of activation in different areas of the brain.

Attention

Control process responsible for transfer from SM to STS

repeated reproduction

In this case, the same participant produces all the reproductions.

Loftus & Palmer (1974) - Automobile Reconstruction

Introduce study: The idea that memory is a reconstructive process forms the basis of the vast body of work/research on EWT by Loftus and her colleagues. Loftus has carried out a vast amount of research into EWT, but her work with her colleague Palmer proved to be one of her most significant research studies in this field.

Link to question: Loftus claims that the nature (wording) of questions can influence witnesses' memory of an experience. Leading questions - that is, questions that are suggestive in some way (hints) - and post-event information facilitate schema processing, which may influence the accuracy of recall. Our memories can be affected (interfered with) by post-event information such as misleading questions.

Aim: To investigate the effect of leading questions on eyewitness testimony of an event.

Method: Participants (p's) were shown 7 films of car accidents (5-30 seconds). After each clip, p's were given a questionnaire asking them to give an account of the accident and to answer a number of questions, including the critical question "How fast were the cars going when they ___ each other?" The verb in the critical question was changed to smashed/collided/hit/bumped/contacted. Experimental conditions: participants were split into 5 groups of 9; each group was asked the question with a different verb.

Results: The speed estimates were influenced by the wording (verb) used. The more severe-sounding verb produced higher speed estimates; for example, 'smashed' gave estimates about 9 mph higher than 'contacted'.

Conclusion: L & P concluded that the wording of the question did have an effect on the speed estimates given. They suggested this may be because people are poor judges of speed and are affected by the wording of a question. The findings can be explained by Bartlett's view of memory as an active reconstructive process: the verbs used in the various conditions activated slightly different schemas, which influenced the speed estimates. In this study, information was received after witnessing the accident because the researchers used a leading question. Thus, when the accident was reconstructed in the participant's mind, the schema influenced by the verb in the leading question (associated with speed) explains how reconstructive memory works. This study also supports the idea that when people witness complex events, they tend to report inaccurate numerical details like time, distance and speed. Loftus' research indicates that it is possible to create a false memory using post-event information. These results indicate that memory is not reliable, but like all research studies there are some limitations that need to be considered relating to its validity/ecological validity (EV).

Connection of study to question: Loftus' research is still valid to some extent (especially the Automobile Reconstruction study) as it relates to the unreliability of memory in EWT, because the leading question asked of eyewitnesses caused a distortion of memory as a result of the reconstructive processes of memory: 'smashed' led participants to remember the accident as more severe than 'contacted'. Therefore it is clear that leading questions can change/influence previously stored information in memory (make us reconstruct memories).
However, due to demand characteristics, it cannot be concluded that the verb in the leading question was the only influence on participants' speed estimates; it only played a part in that influence.

Retrieval

LTS to STS

Tau protein

Neurofibrillary tangles: AD patients have an accumulation of tau proteins, which are supposed to support the structure of neurons; the tau is abnormal and the structure of the neurons collapses.

scripts

Scripts provide information about the sequence of events that occur in particular contexts (e.g. going to a restaurant, visiting the dentist, attending class).

Iconic memory

Visual sensory store

Alzheimer's disease

serious degenerative brain disease, progression is continuous and irreversible

Declarative memory

the memory of facts, data, and events.

Bartlett (1932) - "War of the Ghosts"

A significant researcher into schemas, Bartlett (1932) introduced the idea of schemas in his study entitled "The War of the Ghosts."

Aim: Bartlett aimed to determine how social and cultural factors influence schemas and hence can lead to memory distortions.

Method: Participants were of an English background. They were asked to read "The War of the Ghosts" - a Native American folk tale. Their memory of the story was tested using serial reproduction and repeated reproduction, where they were asked to recall it six or seven times over various retention intervals. Serial reproduction: the first participant reads the story and reproduces it on paper, which is then read by a second participant who reproduces the first participant's reproduction, and so on until it has been reproduced by six or seven different participants. Repeated reproduction: the same participant reproduces the story six or seven times from their own previous reproductions. The reproductions occur over time intervals ranging from 15 minutes to as long as several years.

Results: Both methods led to similar results. As the number of reproductions increased, the story became shorter and there were more changes to it. For example, 'hunting seals' changed into 'fishing' and 'canoes' became 'boats'. These changes show the alteration of culturally unfamiliar elements into things the English participants were culturally familiar with, making the story more understandable in terms of the participants' experiences and cultural background (schemas). Bartlett found that the recalled stories were distorted and altered in various ways, making them more conventional and acceptable to the participants' own cultural perspective (rationalization).

Conclusion: Memory is very inaccurate and is always subject to reconstruction based on pre-existing schemas. Bartlett's study helped to explain, through the understanding of schemas, why people who remember stories typically omit ("leave out") some details and introduce rationalisations and distortions: they reconstruct the story so that it makes more sense in terms of their knowledge, the culture in which they were brought up, and their experiences, in the form of schemas.

Evaluation: Limitations: Bartlett did not explicitly ask participants to be as accurate as possible in their reproductions. The experiment was not very controlled: instructions were not standardised (specific), and the environmental setting of the experiment was disregarded.

Connection of study to question: Bartlett's study shows how schema theory is useful for understanding how people categorise information, interpret stories, and make inferences. It also contributes to the understanding of cognitive distortions in memory.

Sensory stores

A storage system that holds information in a relatively unprocessed form for fractions of a second after the physical stimulus is no longer available - stores sensory characteristics of a stimulus. Plays a vital role in filtering out useless information, enabling us to focus our attention on important details. Duration: decays rapidly Capacity: unlimited Coding: information is picked up by our senses and stored in this form

Short term store

Duration: 15-30 seconds (Atkinson & Shiffrin, 1971) Capacity: limited; 7 ± 2 units (Miller, 1956) Coding: acoustic (Baddeley, 1966). Information is lost unless it is rehearsed (via repetition). A limited-capacity memory system for storing information for brief periods of time. A & S (1968) see STM as a temporary storage depot for incoming information after it receives and encodes information from the sensory memory.

Speisman et al. (1964)

A supporting experiment demonstrating how cognitive appraisals affect bodily responses (emotions) to stressful situations is Speisman et al. (1964).

Aim: To demonstrate the influence of appraisal on emotional experiences.

Method: Participants were shown a 'stressful' film about 'unpleasant' genital surgery depicting Aboriginal boys undergoing circumcision in the context of puberty. It was accompanied by a soundtrack, through which the investigators manipulated the 'appraisal' of the surgery by showing the film in 3 conditions + 1 control: Trauma condition - the pain experienced by the boys and the use of the knife were emphasized. Denial condition - the boys' anticipation of entering manhood was pointed out, thus de-emphasizing the pain (the boys were presented as happy and deliberate). Intellectualization condition - the soundtrack ignored the emotional aspects of the situation and emphasized the traditions of Aboriginal culture. Silent condition - nothing. Arousal state was measured by galvanic skin response (GSR) - a measure of the electrical conductivity of the skin and an indicator of autonomic arousal - and by heart rate.

Findings: Observations and self-reports showed that participants reacted more 'emotionally' to the soundtrack that was more traumatic. Reactions were lowest in the intellectualization and silent conditions. The way participants appraised (the act of assessing someone or something) what they were seeing in the film affected their physiological experience of emotion.

Evaluation: Limitations: Methodological problems - it is possible that the participants' reactions were primarily affected by the music itself, rather than the music affecting the appraisal of the situation.

Conclusion: According to appraisal theory, it can be concluded that the music affected the appraisal of the situation, which in turn affected the emotional reaction to it. ...the cognitive factor of how we appraise certain situations influences our emotional responses.

Connection of study to question: This supports the view that cognitive factors DO interact in emotion to a great extent. State connection to cognitive interactions within emotion: Lazarus' theory of appraisal states that we experience emotions when interacting with our environment and appraising events as good or bad for our well-being. Lazarus suggests that the specific emotions experienced are determined by the pattern of answers the individual gives across the components of the primary and secondary appraisal.

Huettel et al. 2009: fMRI decision

Aim: To determine whether decisions involving risk and ambiguity are similar or different types of decision-making processes.

Method: Laboratory experiment (brain study).

Procedure: Participants were placed in an fMRI machine and given pairs of monetary gambles. The pairs contained ambiguous and risky options the participants had to choose between.

Results: The prefrontal and parietal cortex showed increased activation during the choices. Participants who had a preference for making the ambiguous choice had increased activity in the lateral prefrontal cortex. Participants who had a preference for choosing the risky option had increased activity in the posterior parietal cortex.

Strengths: The uniform method of collecting data and the many controls mean the study can be easily replicated. The study also has researcher triangulation, since the results of the Huettel et al. study were supported by the Parrot (2000) study.

Weaknesses: This is a brain study, so the data are correlational and a clear cause-effect relationship between the activation of certain parts of the brain and the risky behaviour cannot be determined. The other issue is that, since the choices being made do not affect the participants' lives outside of the study, there is a lack of ecological validity.

Peterson & Peterson: MSM

Aim: To investigate the duration of short-term memory, and provide empirical evidence for the multi-store model.

Procedure: A lab experiment was conducted in which 24 participants (psychology students) had to recall trigrams (meaningless three-consonant syllables, e.g. TGH). To prevent rehearsal, participants were asked to count backwards in threes or fours from a specified random number until they saw a red light appear. This is known as the Brown-Peterson technique. Participants were asked to recall trigrams after intervals of 3, 6, 9, 12, 15 or 18 seconds.

Findings: The longer the interval delay, the fewer trigrams were recalled. Participants were able to recall 80% of trigrams after a 3-second delay. However, after 18 seconds less than 10% of trigrams were recalled correctly.

Conclusion: Short-term memory has a limited duration when rehearsal is prevented. It is thought that this information is lost from short-term memory through trace decay. The results of the study also show that short-term memory is different from long-term memory in terms of duration, thus supporting the multi-store model of memory.

Criticisms: This experiment has low ecological validity, as people do not try to recall trigrams in real life.

Brown & Kulik 1977: Flashbulb

Aim: To investigate whether dramatic or personally significant events can cause flashbulb memories.

Hypothesis: People will report vivid and detailed memories of events with high consequentiality and surprise, such as the assassination of John F. Kennedy.

Method: 1. A retrospective questionnaire was used to assess the memories of 40 black and 40 white American male participants for the circumstances in which they learned of public events. The participants were asked the question "Do you recall the circumstances in which you first heard [about the event]...?" (such as what they were doing, who informed them of the news, where they were, etc.). The participants then had to check either yes or no. If they checked yes, they were asked to write a free recall of the circumstances they were in, in any form or length. The questionnaire included the assassinations of John F. Kennedy and Martin Luther King Jr. 2. The questionnaire was used to see if participants had flashbulb memories of significant events, based on the consequentiality of the event, i.e. how much of an impact the event had on the participants' lives. 3. Participants were also asked if they had flashbulb memories of personal events, such as the sudden loss of a loved one.

Findings: 1. There was a positive correlation between the consequentiality of an event and flashbulb memories. 2. It was found that people said they had very clear memories of where they were, what they did, and what they felt when they first learned about an important public occurrence such as the assassination of John F. Kennedy, Martin Luther King, or Robert Kennedy. The participants recalled the assassination of John F. Kennedy most vividly. 3. Of the 80 participants, 73 said that they had flashbulb memories associated with a personal shock such as the sudden death of a close relative.

Strengths: 1. There is much later research that supports FBM. 2. Can be easily replicated. 3. Provides evidence to support anecdotal and personal experience of FBM. 4. Uses both black and white participants, so the findings can be representationally generalized to males of those two ethnicities.

Limitations: 1. Accuracy is doubted because data is collected through a questionnaire. 2. Accuracy of memory could not be measured. 3. Little evidence that emotion affects the encoding stage. 4. Rehearsal: if the event was very important to the individual, the event may have been rehearsed several times, which strengthens the individual's memory of that event. People do not always know that an event is important until later. Neisser suggests that the memories are so vivid because the event itself is rehearsed and reconsidered after the event. 5. Post-event information: post-event information may alter individuals' memories by changing, adding, or removing information about the event. 6. Low participant variability and lacks cross-cultural validity: only male American participants were used. Findings cannot be representationally generalized to other cultures.

Social & cultural factors on cognition (schemas & memory)

An example of the effect of social or cultural factors on one cognitive process is the effect of schemas on memory.

Define schemas: Schemas are cognitive structures that organise knowledge stored in our memory. They are mental representations of categories from our knowledge, beliefs and expectations.

Expand on schema: Any information about particular aspects of the world, such as people, events, and actions, is stored in a person's brain in the form of schemas. The information that people are exposed to is affected by the society and culture they are in. Because people in different societies and cultures are exposed to different information, they will have different schemas. There are three different types of schemas: Scripts - provide information about sequences of events that occur in particular contexts. Self-schemas - organize information we have about ourselves. Social schemas - represent information about different groups of people. Schemas contain stereotypes and expectations acquired during life.

Explain briefly how schemas and memory interact: Schemas are influenced by external factors such as social and cultural aspects, which then affect what is stored in our memory processes.

Define memory: The cognitive process whereby past experience is remembered.

Relationship between cultural influences and memory: Memory content opens up a window through which we can observe cultural influences on the ways in which individuals attend to, represent, organize, retrieve and share event information.

Study: This relationship will be investigated in the following essay, offering a balanced review of the influence of social and cultural factors, with a particular focus on cultural factors, including a range of arguments and factors and supported by appropriate evidence such as research/empirical studies. This study relates to the effect of culture on memory. Participants' recall of a story which was culturally foreign to them was altered to be culturally familiar when they were asked to recall it, due to their schemas (knowledge, background and past experiences). Hence, the culture in which people are brought up influences how they recall and reproduce stories and events to others, introducing cognitive distortions in memory because of their mental representations in the form of schemas. Bartlett's work (1932) demonstrated how schemas originating in one particular culture can affect how literature from another culture is recalled. His participants relied on schematic knowledge, acquired within their culture, to understand and later recall a story from a different culture. Therefore, human cognition is not culturally independent - cognitive abilities are influenced by the social and cultural context in which people live. The implication of these studies is that although the ability to remember is a universal intellectual requirement, specific forms of remembering are not universal, as factors such as cultural aspects differ: not all cultures have the same memory strategies. As demonstrated by the studies, people learn to remember in ways that are relevant for their everyday lives. The studies discussed, in particular Bartlett's work, showed that memory is, to a significant extent, a construction; moreover, one that relies heavily on the schemas we develop in our cultural settings. The schemas we develop from our cultural backgrounds can influence the cognitive process of memory.

The tendency to seek out information that confirms pre-existing beliefs: Confirmation bias

Another common source of heuristics is the tendency to seek out information that confirms pre-existing beliefs.
- Another possible cause of intuitive thinking
- We have a tendency to seek out information that confirms pre-existing beliefs and ignore information that contradicts our beliefs
+ Confirmation bias
- Violates rules of logic (irrational)
- Common in human thinking
Evaluation of bias in thinking:
- Studies of bias are often conducted in artificial labs
+ Ecological validity - would this happen in real life?
+ More real-life scenarios should be used
- People who hold paranormal beliefs often have an illusion of control (picking winning lotto numbers)
+ Often have confirmation bias
- Studies involving deception could be unethical
+ May induce stress

schema theory

Define schema theory: A cognitive theory of processing and organizing information. Schema theory states that "as active processors of information, humans integrate new information with existing, stored information."

Expand on schema theory / effects: Existing knowledge stored in our memory (what we already know) and organized in the form of schemas will affect information processing and behaviour in specific settings. E.g. information we already know affects the way we interpret new information and events and how we store it in our memory. It is not possible to see how knowledge is processed and stored in the brain, but the concept of schema theory helps psychologists understand and discuss what cannot be seen.

Evaluation of Schema Theory

Define strengths of schema theory: Supported by lots of research suggesting that schemas affect memory processes and knowledge, in both a positive and a negative sense. Through supporting studies, schema theory has demonstrated its usefulness for understanding how memory is categorized, how inferences are made, how stories are interpreted, memory distortions, and social cognition.

Define weaknesses of schema theory: Not many studies/research evidence that evaluate and find limitations of schema theory. Lacks explanation: it is not clear exactly how schemas are initially acquired, how they influence cognitive processes, or how people choose between relevant schemas when categorising people. Cohen (1993) argued that: the concept of a schema is too vague to be useful; schema theory does not show how schemas are acquired; it is not clear which develops first, the schema to interpret the experiences or vice versa. Schema theory explains how new information is categorised according to existing knowledge, but it does not account for completely new information that cannot link with existing knowledge. Therefore, it does not explain how new information is organised in early life, e.g. language acquisition. Construct validity.

Experiments

Define what an experiment is / what is the purpose of an experiment: Experiments are used to determine the cause-and-effect relationship between two variables (the independent (IV) and dependent (DV) variables).

Outline how experiments are used: Researchers manipulate the independent variable (IV) and measure the dependent variable (DV). They attempt to control as many extraneous variables as possible to provide controlled conditions (laboratory experiments). Experiments are considered a quantitative research method; however, qualitative data may be collected as well.

Types of experimental settings: There are three different types of experiments: laboratory experiments, natural (quasi) experiments and field experiments.

Outline why experiments are used: It is considered/perceived to be the most scientific research method. It determines the cause-effect relationship between two variables (IV & DV).

Ethics of Cognitive research: Clive Wearing

Ethics: protection of participants, consent, right to withdraw, confidentiality, deception, debriefing.

Clive Wearing - Sacks (2007)

Background: Clive Wearing was a musician who contracted a viral infection, encephalitis. This left him with serious brain damage to the hippocampus, which caused memory impairment. He suffers: anterograde amnesia - impairment in the ability to remember after a particular incident; retrograde amnesia - impairment in the ability to remember before a particular incident. Wearing still has the ability to talk, read, write, and sight-read music (procedural knowledge). He could not transfer information from STM to LTM. His memory lasted 7-30 seconds, and he was unable to form new memories.

Ethical issues of this study: There were a set of ethical issues in this study, which include: Consent - Wearing did not give consent to being in a study; his wife gave consent for him to be studied; but Wearing would not remember being informed of the study or giving consent, due to his short memory span. Confidentiality - the study violated Wearing's right to confidentiality: his real name was revealed and his case was revealed to the world of psychology; but since Wearing's memory lasts only a short period of time, he would not remember that his confidentiality was violated. Right to withdraw - Wearing would not remember being in a study or his right to withdraw, and so would not express any desire to withdraw. Debriefing - Wearing was not debriefed, but because of his short memory span he would not know he is in a study and would not desire a debriefing.

Lazarus' Theory of Appraisal (1982; 1991)

Explain theory: The appraisal theory of emotion is based on the evaluation of situations according to the significance they have for us; therefore it has more of a cognitive basis and suggests that cognition is essential. This theory states that emotion is experienced when, in our interaction with the environment, we assess our surroundings as to whether they are beneficial or harmful to our well-being. Appraisals are interpretations of situations and how they will affect one's well-being. Appraisals are both conscious and unconscious, and contribute to the quality and intensity of an emotion.

The appraisal theory is based on two concepts:

Primary appraisal - where the organism assesses the significance or meaning of the event. Three components: Motivational relevance - is this relevant to my goals? (If positive, then there is emotion.) Motivational congruence - is this favourable to my goals? (Positive emotion when yes, negative emotion when no.) Accountability - who is responsible for what is happening?

Secondary appraisal - where the organism appraises the consequences of the event and decides how to act. It also has three components: Problem-focused coping - coping with a situation by changing it to make it less threatening. Emotion-focused coping - changing how I feel about the situation (e.g. reinterpreting it). Future expectancy - to what extent can I expect the situation to change?

Long term store

Holds a vast quantity of information, which can be stored for long periods of time. Information kept here is diverse and wide-ranging, including all our personal memories, general knowledge and beliefs about the world, and plans for the future; it is also where our knowledge about skills and expertise is deposited. Duration: long-lasting (perhaps for a lifetime); proposed that it could last for up to 48 years (Bahrick et al., 1975) Capacity: unlimited Coding: primarily semantic (Baddeley, 1966), but also acoustic and visual. Information in the LTS can also be recalled via retrieval, bringing the information back to the STS.

framing effect

Framing effect: The most influential normative model of choice under uncertainty is expected utility theory. In this theory you multiply the utility of an outcome by the probability of that outcome, and choose the option that yields the highest number. For example, suppose you were choosing between two gambles: if you choose option A, you get $10 for certain; if you choose option B, you get $200 with 6% probability. According to the normative theory, it is more rational to take the risk: the expected utility of option A is 10 × 1 = $10, whereas the expected utility of option B is 200 × 0.06 = $12. However, numerous studies have demonstrated that in their real-life choices people do not always adhere to the predictions of the normative model. They seem to be too eager to take risks in some situations and too avoidant of risks in others, depending on seemingly irrelevant factors. In 1979 Daniel Kahneman and Amos Tversky proposed a descriptive theory of decision-making under risk that is known as prospect theory. The idea behind the theory was to take the normative expected utility model and modify it as little as possible to explain the observed deviations from the normative model. They were successful, and prospect theory quickly gained popularity as a descriptive model of choice. Prospect theory claims that individuals think about utilities as changes from a reference point (and the reference point may easily be changed by the way the problem is formulated).
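
As a compact restatement of the arithmetic above (a sketch using the numbers given in this card, with the utility u(x) taken to be the monetary amount itself):

\[
EU(X) = \sum_i p_i \, u(x_i), \qquad
EU(A) = 1.0 \times \$10 = \$10, \qquad
EU(B) = 0.06 \times \$200 = \$12 .
\]

Since EU(B) > EU(A), the normative (expected utility) model prescribes the gamble, option B, even though many people would choose the certain $10.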

Anderson and Pichert (1978)

Further support for the influence of schemas on memory at the encoding and retrieval stages was reported by Anderson and Pichert (1978).

Aim: To investigate if schema processing influences encoding and retrieval.

Method: Half the participants were given the schema of a burglar and the other half were given the schema of a potential house-buyer. Participants then heard a story which was based on 72 points, previously rated by a group of people according to their importance to a potential house-buyer (leaky roof, damp basement) or a burglar (10-speed bike, colour TV). Participants performed a distraction task for 12 minutes before recall was tested. After another 5-minute delay, half of the participants were given the switched schema: participants with the burglar schema were given the house-buyer schema and vice versa. The other half of the participants kept the same schema. All participants' recall was tested again.

Shorter method: Participants read a story from the perspective of either a burglar or a potential home-buyer. After they had recalled as much as they could of the story from the perspective they had been given, they were shifted to the alternative perspective (schema) and were asked to recall the story again.

Results: Participants who changed schema recalled 7% more points on the second recall test than the first. There was also a 10% increase in the recall of points directly linked to the new schema. The group who kept the same schema did not recall as many ideas in the second testing. The research also showed that people encoded information which was irrelevant to their prevailing schema (those who had the buyer schema at encoding were able to recall burglar information when the schema was changed, and vice versa). This shows that our schemas of "knowledge," etc. are not always correct, because of external influences.

Summary: On the second recall, participants recalled more information that was important only to the second perspective or schema than they had done on the first recall.

Conclusion: Schema processing has an influence at both the encoding and retrieval stages, as the new schema influenced recall at the retrieval stage.

Evaluation: Strengths - a controlled laboratory experiment allowed researchers to determine a cause-effect relationship showing how schemas affect different memory processes. Limitations - lacks ecological validity: laboratory setting; unrealistic task, which does not reflect something that the general population would do.

Connection of study to question: This study provides evidence to support schema theory affecting the cognitive process of memory. Strength of schema theory: there is research evidence to support it.

Milner (1966) - HM

How does it reflect a case study? It was an in-depth study of HM's amnesia, a condition that followed surgery for the epileptic seizures he had suffered since a head injury sustained when he was 9 years old.

Why was a case study used? To study the unusual phenomenon of how, as a result of the removal of HM's hippocampus and parts of his temporal lobe, amnesia can occur, as removing these significant parts of the brain can damage the formation of memories or impact parts of a person's memory in general. To study the case of a man who suffered from anterograde amnesia (inability to form new memories) as a result of the removal of tissue from the temporal lobe, including the hippocampus, which could not ethically be produced in a laboratory experiment, as it would most likely cause a condition similar in extent to HM's. A case study allowed researchers to observe HM's behaviour from when he was a young child, through his adolescent years, and after the surgery, through which they found a link between the temporal lobe/hippocampus and memory; this led to further research, findings and advances in later studies, helping improve people's understanding of certain disorders such as amnesia, and of memory in general. This unusual phenomenon could not be studied using other research methods such as experiments or naturalistic observation, and in-depth information could not be obtained/collected if case studies weren't used.

Craik and Tulving, 1975

How does it reflect an experiment? IV: depth or level of processing (shallow or deep). DV: memory recall of the original words from a list of 180 words, in which the original series of 60 words in the questions were interrelated/mixed into the 180-word series. Experimental type: laboratory experiment, because the study was conducted in a laboratory setting and the IV was manipulated.

Why was an experiment used? It allowed a cause-and-effect relationship to be developed and recognised. Cause: level of (shallow or deep) processing. Effect: affects memory recall. The cause-and-effect relationship would not have been able to be found using other research methods (e.g. observations or interviews). It was the most suitable type to use for this particular study.

Results: Participants recalled more words that were semantically processed compared to phonemically and visually processed words. Semantically processed words involve deep processing, which results in more accurate recall.

serial reproduction

In a study, the first participant reads the original story and then reproduces it on paper; this is then read by a second participant, who reproduces it for a third participant, and so on.

Research methods on Cognitive research

In cognitive psychology, testable theories are developed about cognitive structures and processes which cannot be directly observed. These theories are tested using research methods such as experiments and case studies. At the CLA, the methods of investigation undertaken by cognitive psychologists range from laboratory experiments to case studies. They have in common the aim of obtaining relevant information on the mental processes used to acquire, store, retrieve and apply knowledge about the world.

Tversky and Kahneman (1981): Framing effect

In one of their famous experiments, Tversky and Kahneman (1981) gave their subjects the following problem. Imagine that the USA is preparing for an outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows (and the options were different for two independent groups of subjects):

Group 1: Program A: 200 people will be saved. Program B: there is a 1/3 probability that 600 people will be saved and a 2/3 probability that no people will be saved.

Group 2: Program A: 400 people will die. Program B: there is a 1/3 probability that nobody will die and a 2/3 probability that 600 people will die.

Note that both choice sets are identical. The only difference is in how the situation is described, either in terms of potential gains ("will be saved") or in terms of potential losses ("will die"). Having said that, it is interesting that participants' choices in these two groups were reversed. Here is the percentage of individuals who chose each of the two programs in the two groups:

            Group 1   Group 2
Program A     72%       22%
Program B     28%       78%

Table 3.10 Findings from Tversky and Kahneman (1981)

Since nothing changed in the problem from the rational point of view, this reversal of choices cannot be explained by the normative (expected utility) model. The expected utilities of the two programs are the same. So how can we explain the deviation from the normative model? Presumably most individuals in group 1 chose program A because they were trying to avoid the risk of not saving anyone at all (2/3 probability), whereas in group 2 individuals seemed to be more willing to take the risk: 400 deaths seems almost as bad as 600 deaths, so why not take a chance? Tversky and Kahneman explain this finding in terms of a shift in the reference point. In the first version the reference point is the future state (600 people dead), so the options are perceived as potential gains (how many people can we save?). In the second version the reference is shifted to the present state (no one has died yet), so the options are perceived as potential losses (how many people can we lose?). So, depending on whether outcomes are described ("framed") as gains or losses, subjects give different judgements: they are more willing to take risks to avoid losses and have a tendency to avoid risk associated with gains. In other words, people avoid risks when outcomes are framed as gains but take risks to avoid losses. This is known as the framing effect. People assign less value to gains and more value to losses. We selectively redistribute attention to potential outcomes based on how the problem is framed.
- People are more risk-averse when the problem is described (framed) in terms of gains and less risk-averse when it is described in terms of losses.
- Leads to cognitive biases
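
To make the statement "the expected utilities of the two programs are the same" explicit, the expected number of survivors (out of 600) can be computed for each option; this is a sketch of the implied arithmetic, not part of the original study report:

\[
E[\text{saved} \mid A_1] = 200, \qquad
E[\text{saved} \mid B_1] = \tfrac{1}{3}(600) + \tfrac{2}{3}(0) = 200,
\]
\[
E[\text{saved} \mid A_2] = 600 - 400 = 200, \qquad
E[\text{saved} \mid B_2] = \tfrac{1}{3}(600) + \tfrac{2}{3}(0) = 200,
\]

where the subscripts denote Group 1 (gain frame) and Group 2 (loss frame). All four options have the same expected outcome; only the framing differs, so any preference reversal must come from the description rather than the numbers.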

Flashbulb memory

In psychology, these are called flashbulb memories, which are memories of learning something so shocking or surprising that it creates a strong and seemingly very accurate memory of learning about the event--but not the event itself. The name refers to the old process of taking a photo. When the photographer snapped the picture, the flashbulb would go off, thus indicating a moment in time that had been captured exactly as it appeared before him.

Strengths of MSM

Influential; an early model that stimulated further research into memory processes.
Still accepted by most psychologists and is still widely used.
Considerable evidence demonstrating the existence of STM and LTM as separate memory stores, differing in duration, capacity and coding.
Supported by cases of anterograde amnesia.
Based on considerable evidence, and evidence for the model is gained from a variety of sources, e.g. studies of brain-damaged individuals, whereby these studies support the distinction between STS and LTS: some patients with amnesia suffer damage to LTM but not STM, and vice versa, as demonstrated by Shallice & Warrington (1970); Milner (1966); Baddeley (1997).
Demonstrates insight into different memory processes, such as: differences in encoding, i.e. STM = acoustic, LTM = semantic; differences in capacity, i.e. STM = 7±2, LTM has no limits; differences in duration, i.e. STM = approx. 20 seconds (Peterson & Peterson, 1959), LTM = up to 48 years (Bahrick et al., 1975).
Demonstrates the inability to form declarative (but not procedural) memories in patients with brain damage/amnesia.

Case Studies

Introduce the next research method (case studies) and relate it within the context: Like experiments, another key research method used frequently at the CLA is the case study.

Outline how case studies are used: An in-depth study of an individual or small group. Because of this, case studies obtain information that may not be identifiable by using other research methods. Case studies are considered a qualitative research method; however, quantitative data may be collected as well. They involve the use of a combination of several research methods, such as interviews and observations. The conclusions are more valid than what may be gained from any of these research methods individually.

Outline why case studies are used: To obtain enriched (especially qualitative) data and information about mediating processes which could not be gained in any other way. To study unusual psychological phenomena. To stimulate new research into an unusual phenomenon. To study a particular variable that cannot be produced in a laboratory, for example due to ethical or financial restrictions. To obtain other information that may not be available from other methods.

Outline why case studies are not used / limitations: Researchers may develop more personal relationships with participants, which may result in subjective data or different behaviour of participants and researchers. Results of case studies are affected by the researcher's interpretations, which may be subjective and influenced by the researcher's beliefs, values, and opinions. May cost a lot of time, effort and money due to the amount of data and the length of a case study. Cannot be replicated. Lacks population validity - the extent to which findings can be generalised to the whole population - because of the small participant sample, especially if the study investigates a unique phenomenon.

Appraisal

Lazarus (1982): the evaluation of situations according to the significance they have for us; we experience emotions when we appraise events as beneficial or harmful to our well-being.

MSM

Memory is defined as the mental process of encoding, storing and retrieving information.

Outline memory process: Memory undergoes a series of stages in order to store information. Encoding process: incoming information is organized and transformed so it can be entered into memory. Storage process: involves entering and maintaining information in memory for a period of time. Retrieval process: involves recovering stored information from memory so it can be used.

Describe the MSM: Proposed by Atkinson & Shiffrin (1968), the multi-store model (MSM) consists of three memory stores - sensory memory (SM), short-term memory (STM) and long-term memory (LTM) - each used for different tasks. Duration: how long information can be stored. Capacity: how much information can be stored. Coding: in what form information can be stored.

Wager et al. 2008: fMRI emotion

Neurophysiological technique.

Aim: To find out what happens in the brain when someone uses cognitive reappraisal.

Method: Laboratory experiment with repeated measures.

Procedure: Participants were given three different conditions for viewing images within an fMRI machine: the neutral condition, where participants were asked to look at neutral images; the look-negative condition, where they were asked to view negative images; and the reappraise-negative condition, where participants viewed negative images and were asked to reinterpret them positively.

Results: The reappraisal was successful in lowering the emotional impact of the negative images. There was increased prefrontal activity, reduced activity in the amygdala and increased activity in the nucleus accumbens. The researchers concluded that the prefrontal cortex regulates emotion by reducing activity in the prefrontal cortex-amygdala pathway and increasing activity in the prefrontal cortex-nucleus accumbens pathway.

Strengths: The research is supported by previous models of the prefrontal cortex's impact on emotion regulation (research triangulation), and the research methodology is well controlled, allowing easy replication.

Weaknesses: The research could not control all confounding variables, so the influence of other parts of the brain on reappraisal was not measured. Since brain studies are correlational, it is difficult to say with absolute certainty that the changes in the prefrontal cortex-amygdala pathway and the prefrontal cortex-nucleus accumbens pathway are what caused the lowering of the emotional impact of negative images, and not the result of another brain function (no cause-and-effect relationship).

Reconstructive memory

One cognitive process whose reliability is questioned is memory; more specifically, its significance for eyewitness testimony (EWT).

Define EWT: EWT is an important area of research in cognitive psychology and memory. EWT is a legal term. It refers to an account given by people of an event they have witnessed.

Give an example (optional): For example, they may be required to give a description at a trial of a robbery or a road accident they have witnessed.

Where is EWT used? EWT is vital and is used in legal systems as evidence in criminal trials in countries all over the world, which rely on the accuracy of human memory/EWT to decide whether a person is guilty or not. Therefore, the reliability of testimonies is important, as it determines someone's future.

State connection between memory and EWT: Memory is very important and plays a significant role in EWT.

Talk about reliability of memory in EWT: Previously, EWT was generally seen as very trustworthy and convincing; judges, jurors, police and parts of law enforcement saw and treated EWT as very reliable. However, research from various sources now shows that memory can be subject to distortion and reconstruction. Researchers have demonstrated that memory may not be as reliable as we think: through the use of DNA technology, psychologists have demonstrated that eyewitnesses can be wrong. Memories may be influenced by factors other than what was recorded in the first place, due to the reconstructive nature of memory. The term "reconstructive" refers to the brain's active processing of information to make sense of the world.

State what you are doing in the essay (in terms of factor & cognitive process): Therefore, the reliability of memory in EWT will be investigated, considering the merits of both sides of the argument regarding its reliability, *however with a focus on how EWT (in laboratory situations?) can be disturbingly inaccurate. This argument will be discussed in relation to appropriate evidence in the form of research studies and experiments. *(or)... by firstly demonstrating the inaccuracy/unreliability of EWT, then presenting a counterargument by introducing a study which refutes this idea, therefore coming to a conclusion (to an extent) about the reliability of memory in EWT.

Introduce a significant researcher into EWT, Elizabeth Loftus, and her arguments: One of the leading researchers in the field of EWT research, Elizabeth Loftus, supports Bartlett's idea of memory as reconstructive. The idea that memory is a reconstructive process is crucial to an understanding of the reliability of EWT: eyewitnesses do not reproduce what they witness, but rather reconstruct their memories on the basis of relevant schematic information (personal interpretation dependent on our learnt or cultural norms and values - the way we make sense of the world). This illustrates how memory is unreliable, as our schemas can be misled or influenced (by cultural, social and environmental factors) and are not always correct. She expressed concern at the over-reliance on EWT in court, with her research showing that our memories can reconstruct information. Therefore, Loftus has argued that EWT can be highly unreliable, because of the ability of our memories to reconstruct events.

Give an example (optional): Many people believe that memory works something like a videotape, where storing information is like recording and remembering is like playing back what was recorded, with information being retrieved in much the same form as it was encoded. However, memory does not work in this way. It is a feature of human memory that we do not store information exactly as it is presented to us. Rather, people extract from information the gist, or underlying meaning. In other words, people store information in the way that makes the most sense to them. We make sense of information by trying to fit it into schemas, which are a way of organising information. Schemas are mental 'units' of knowledge that correspond to frequently encountered people, objects or situations. They allow us to make sense of what we encounter so that we can predict what is going to happen and what we should do in any given situation. These schemas may, in part, be determined by social values and therefore prejudice. Schemas are therefore capable of distorting unfamiliar or unconsciously 'unacceptable' information in order to 'fit in' with our existing knowledge or schemas. This can, therefore, result in unreliable eyewitness testimony.

State relevance of example given --> link to question: Bartlett tested this theory using a variety of stories to illustrate that memory is an active process and subject to individual interpretation or construction. In his famous study 'War of the Ghosts', Bartlett (1932) showed that memory is not just a factual recording of what has occurred, but that we make "effort after meaning". By this, Bartlett meant that we try to fit what we remember with what we really know and understand about the world. As a result, we quite often change our memories so they become more sensible to us.

cognition

is a term referring to the mental processes involved in gaining knowledge and comprehension. These processes include thinking, knowing, remembering, judging and problem-solving.

Maintenance

is the process of repeatedly verbalizing or thinking about a piece of information. Your short-term memory is able to hold information for about 20 seconds. However, this time can be increased to about 30 seconds by using maintenance rehearsal.

The tendency to focus on a limited amount of available information:

Remember what you know about sensory memory and how it gets transferred into STM. Sensory memory has a high capacity but short duration. This information must be attended to in order to go to STM. However, we can only pay attention to certain chunks of information at a time. From the sea of stimuli we single out a stimulus to focus on, aka selective attention. This is mediated by preconceptions and expectations; information passes through a lens of schemas.

Positive & Negative effects of technology on cognition.

Research studies are now revealing that the widespread use of technology is having both positive and negative effects on our students' attention and memory systems. Because young brains are still developing, their frequent exposure to technology is actually wiring their brains differently from the brains of children in previous generations. As these so-called "digital natives" interact with their environment, they are learning how to scan for information efficiently and quickly. Technology allows them to be more creative and to access multiple sources of information, practically simultaneously. But all this comes at a cost. Learning requires attention. Without it, all other aspects of learning, such as reasoning, memory, problem solving, and creativity, are at risk. How children develop attention is largely determined by their environment. Modern technology has thrust children into a world where the demands for their attention have increased dramatically. Distraction has replaced consistent attention, and, as we noted earlier, the capacity of working memory appears to be shrinking. Their brains are becoming accustomed to, and are rewarded for, constantly switching tasks, at the expense of sustainable attention. This constant switching from one task to another has a penalty. When students switch their attention, the brain has to reorient itself to the new task, further taxing neural resources. And because of working memory's limited capacity, some of the information from the first task is lost as new information from the second task moves in. Furthermore, the switching causes cognitive overload, a condition where the flow of information exceeds the brain's ability to process and store it. Consequently, the students cannot gain a deep understanding of the new learning or translate it into conceptual knowledge.

schemas

Schemas are cognitive structures that organise knowledge stored in our memory. They are mental representations of categories (from our knowledge, beliefs and expectations) about particular aspects of the world such as people, objects, events, and situations. Expand on schema: Knowledge that is stored in our memory is organized as a set of schemas (or knowledge structures), which represent the general knowledge about the world, people, events, objects, actions and situations that has been acquired from past experiences.

Free recall

participants study a list of items on each trial, and then are prompted to recall the items in any order

self-schemas

Self-schemas organise information we have about ourselves (information stored in our memory about our strengths and weaknesses and how we feel about them).

social schemas

Social schemas (e.g. stereotypes) - represent information about groups of people (e.g. Americans, Egyptians, women, accountants, etc.).

leading question

question that encourages a particular desired answer, often because of the way that the question is phrased.

Rehearsal

repetition STS to LTS

System 1 and System 2 thinking

System 1 and system 2 thinking: We have already discussed the important distinction between normative models (for example, logic, the theory of probability, utility theory) and descriptive models of thinking and decision-making. Attractive as they are in leading us to the rational choice, normative decision theories are unrealistic when it comes to making decisions in real life. As already discussed, they do not account for: our limited computational capacity; the influence of emotion on thinking; other goals that the decision-maker might have, for example justifying the choice to others, confirming one's own beliefs or supporting self-esteem. So, naturally, people use shortcuts and incomplete, simplified strategies, which are known as heuristics. Heuristics may also be expressed as rules, which makes them an exciting area of research. If we identify and describe a set of common heuristics and prove that people actually use them in real-life decision-making scenarios, we will be able to predict what people are likely to think or do in certain situations. Moreover, we might be able to design computer intelligence that mimics human intelligence. Using heuristics leads to cognitive biases (which can be described effectively if you compare heuristics to the normative model for a particular situation). However, heuristics are useful. First, they save energy; we don't have to meticulously analyse all the aspects of the situation every time we are faced with a choice. Second, heuristics are often based on experience, which means that you have used them before and they worked reasonably well. Of course the rule saying "if it worked before, it will work now" is not perfect, but it is reasonable enough for a variety of everyday situations. In 2003 Daniel Kahneman proposed an extension to the information-processing approach by differentiating between two independent systems, system 1 and system 2. This differentiation became the core of his bestselling book Thinking, Fast and Slow (2011), which is a must-read if you have an interest in cognitive biases and behavioural economics. According to the theory, system 1 thinking is fast, instinctive, emotional, automatic and relatively unconscious, whereas system 2 thinking is slower, more analytical, logical, rule-based and conscious. System 1 is commonly referred to as "intuition". It has been argued that system 1 developed as an adaptive reasoning mechanism which is based on prior experience (and survival goals) and enables us to make fast and reasonably accurate decisions that have proved to be sufficiently successful in the past. System 2 evolved later with the development of language and abstract reasoning, and this enables us to overcome some of our immediate automatic responses and analyse the situation in greater depth. Due to this legacy, we use system 1 in the majority of common situations, but we switch to system 2 when the situation is unusual and complex or when we encounter difficulties with our intuitive response. By this reasoning, our thinking works sequentially: first, there is a fast and automatic system 1 response, and then this response is (or is not) corrected by the more conscious cognitive efforts of system 2. System 1 works better in "predictable" environments. Arguably, in today's world with its high degree of complexity, tremendous rates of production of new knowledge and rapidly changing circumstances, individuals need to be much more flexible and adapt more quickly.
So the cognitive demands placed on system 2 processing seem to be increasing. This makes the study of heuristics and biases associated with system 1, as well as the way in which descriptive models of thinking deviate from normative models, even more pertinent.

Secondary appraisal

The aim is to provide information about the individual's coping options in a situation. Problem-focused coping: can I cope with the situation by changing it to make it less threatening? Emotion-focused coping: can I cope with the situation by changing the way I feel about it, trying to reduce its emotional impact? Future expectancy: to what extent can I expect that the situation will change?

fMRI studies on Decision making

studies have identified brain regions involved in decision-making, notably the prefrontal cortex. Activation is stronger when decisions involve risk. Risky decisions have several possible outcomes whose probabilities are known (for example, betting on black or red in roulette). Ambiguous decisions have outcomes whose probabilities are unknown (for example, booking a beach holiday without knowing what the weather will be).

reconstructive process

the act of remembering is influenced by various other cognitive processes including perception, imagination, semantic memory and beliefs, amongst others.

Procedural memory

the memory of how to do things.

rationalization

The process of making the story conform to the cultural expectations of the participants

Evaluation of Working Memory Model

The strength of the working memory model is that it is more sophisticated than the multi-store memory model and allows us to explain a wider range of phenomena (for example, participants' performance in the dual-task technique or the observable effects of articulatory suppression). This model can integrate a large number of findings from work on short-term memory. Subsequent research has also shown that there are physiological correlates to some of the separate components of the model. For example, distinctly different brain parts "light up" in brain scanning images when the task activates either the phonological loop or the visuospatial sketchpad. Finally, on the plus side, the working memory model does not over-emphasize the role of rehearsal. However, it should be noted that models of this degree of complexity are harder to test empirically. You must have noticed that all the experiments, however complicated, are only designed to test one specific aspect of the model (for example, the central executive). For complex models, it becomes increasingly difficult to design well-controlled studies that would test the model in its entirety. This means that the model is difficult to falsify. Maybe as a consequence of this, and due to the existence of multiple potential explanations of the same experimental result, the exact role of some of the components of the model (the central executive and especially the episodic buffer) remains unclear. Similarly, it has been argued that the visuospatial sketchpad should be further divided into two separate components, one for visual information and one for spatial information. Finally, working memory only involves STM and does not take into account other memory structures, such as LTM and sensory memory. Strengths: empirical support from dual-task experiments; empirical support for the central executive, the phonological loop and the visuospatial sketchpad as key components; a dynamic (not static) model. Limitations: does not account for the effects of practice or time; lacks evidence for how the central executive works; imprecise, as components could be further subdivided; incomplete, as it only focuses on STM. Just like any model of a cognitive process, the working memory model is far from perfect. There is lots of evidence to support it, but it faces many challenges too. This should not be too surprising, because memory is a complex cognitive process, and although research has come a long way in the past few decades, psychologists still are not really all that close to cracking the mysteries and machinations of memory and other cognitive processes. There are so many variables involved that any model is bound to have some weaknesses. Even with all the evidence supporting the working memory model and its many components (such as the studies discussed above), the model still faces many challenges, and psychological understanding of working memory is likely to change in the coming years as a result of continuing research. The first criticism, of course, is that the working memory model is not really a complete theory of memory, because it focuses on STM. There may be a whole other set of processes going on in LTM that have any number of unknown impacts on STM processing, so the working memory model is sort of like a close-up of an incomplete picture. It is a good theory and it has stood up to and evolved with decades of research, but it still does not cover every stage of memory. Sensory memory is probably important too, and the working memory model does not really touch on it.
Memory is also unreliable in some ways (as discussed further in 3.B.1), and the working memory model does not account for how distortions of memory happen. Emotion too has been found to influence memory in various ways, but this isn't included in the model. Furthermore, despite revisions over the years, the working memory model still might not be precise enough. There is general agreement that the phonological loop should be split into two separate components: a phonological store and an articulatory control system. A similar argument is made against the visuospatial sketchpad, with some theorists arguing that it too should be split into separate components for visual and spatial memory. (Figure 8: Blindness doesn't necessarily interfere with spatial awareness.) The argument for further subdividing the visuospatial component in particular is built on studies investigating spatial awareness in blind people. Jones (1975) conducted a meta-analysis of experiments on spatial awareness in blind people, and concluded that 'vision is not a necessary condition for spatial awareness' (Jones, 1975). Therefore, the visuospatial sketchpad might not be specific enough to how STM actually works, because visual and spatial inputs appear to be processed independently. Jones' evidence suggests that the visuospatial sketchpad should be subdivided into two components to more accurately reflect how memory is processed. There is also very little evidence for how the central executive works and what it does. Its functioning can be inferred, as seen in some of the dual-task experiments, but the central executive is exceedingly difficult to isolate, probably because it is so multi-functional. It appears to be involved in some way in virtually every cognitive task, and so far it has been very difficult to measure because of that. Another limitation is that the working memory model does not really explain how processing abilities can change with practice or time. The Strobach et al. (2012) study discussed above is a good example of this, because it shows how executive functioning improved in the non-gamers after 15 hours of practice. The working memory model does not explain such improvements, so there's still a lot to learn about how the central executive functions. (Figure 9: Strengths and limitations of the working memory model.) Any limitations of the working memory model are balanced out by its considerable strengths. The working memory model as it currently stands is the product of decades of research and criticism. It has been modified and updated on the basis of research findings over the years, and in this way the working memory model stands as an almost ideal example of scientific inquiry: it all comes down to data and an open mind. Baddeley, for his part, has revised the model multiple times, so it is in a state of constant refinement. Just as significantly, the working memory model is also supported by a seemingly never-ending range of empirical studies, only a few of which are described above. Furthermore, the working memory model is more realistic than the multi-store model of memory because it allows for dynamic processing. STM capacity, for example, is not static or fixed, but changes according to variables like word length or input modality (visual, spatial, verbal, etc.).
While computer analogies might work to a very limited degree in explaining cognitive processing, the reality is that human mental processes are far more dynamic than computer processing, and the working memory model at least accounts for this, even if it cannot fully explain it. Similarly, the multi-store model of memory suggests that STM is a unitary system, kind of like a springboard for all short-term inputs, regardless of their modality. The working memory model provides a much more complete explanation of STM, and the independent visuospatial, phonological, and executive components are much more in line with the complexity of cognitive processes in humans. When it comes down to it, this means that the working memory model has more ecological validity than the multi-store model. Furthermore, the multi-store model over-emphasises rehearsal in STM, and the working memory model goes way beyond that to allow for several types of cognitive processing within STM. Finally, the working memory model passes the 'application test', meaning that the model can be used to explain all kinds of cognitive tasks, including verbal reasoning, comprehension, reading, problem solving, visual processing, spatial processing, and so on. The multi-store model was a good start, but the working memory model is taking things to the next level. Baddeley et al. (1975) Aim: To test the effect of word length on memory span. Method: A series of experiments testing working memory capacity. First experiment: The researchers prepared lists of 4-8 words, with half of the lists using short words and the other half using long words. The lists were presented in ascending order, and there was a 1.5-second delay between each word. Afterwards, participants were given 15 seconds to recall the words in the order they were presented. This continued until participants failed on all eight sequences, which was thought to indicate the extent of working memory capacity. Seventh experiment: This time, the words were visually presented while participants either remained silent (the control condition) or counted aloud (the articulation condition). Results: In the first experiment, participants were able to recall more of both the shorter words and the shorter lists. In the seventh experiment, the silent group recalled more words than the articulation condition. Conclusions: First, word length appears to have an effect on memory span, and the phonological loop has certain limits. Furthermore, working memory appears to be modality specific, meaning that visual and verbal inputs are processed by separate components of working memory. Strengths: Clear isolation of working memory components. Limitations: Very limited ecological validity in some ways, as memory in the real world is rarely so isolated. Methodological considerations: Experimental design is replicable. Use of a control group. Ethical considerations: There are no major ethical considerations for this study. All researchers conducting studies within the field of psychological research are expected to consider ethical guidelines as discussed in 1.A.5.

Illusory correlations and implicit personality theories : Chapman and chapman

The tendency to seek out information that confirms pre-existing beliefs is also seen in illusory correlations and implicit personality theories. An illusory correlation is a belief that two phenomena are connected when in fact they are not. You will come across illusory correlations in the sociocultural approach to behaviour when you study stereotypes, because illusory correlations are often believed to be the mechanism of stereotype formation. Implicit personality theories are sets of beliefs that you have about the behaviour of others; you predict their behaviour on the basis of those beliefs. For example, you may implicitly believe that all muscular, bald men are dangerous (based on the history of your interactions with them in the past, or maybe a number of movies that you have watched), and so you would avoid bald, well-built males in a variety of situations due to your (stereotyped) implicit personality theory. What is the role of confirming pre-existing beliefs in the formation of illusory correlations and implicit personality theories? Chapman and Chapman (1969) demonstrated this in a sample of practising psychodiagnosticians (N = 32) who used the Rorschach ink-blot test in their practice. They concentrated specifically on diagnosing male homosexuality. Prior research had revealed some Rorschach signs that are statistically associated with male homosexuality and some that are not. Two signs that had been shown to be clinically valid signs of male homosexuality are: a response on Card IV of a "human or animal contorted, monstrous, or threatening" (examples would be "a horrid beast" or "a giant with shrunken arms"), and a response on Card V of an "animalized human or humanized animal" (examples would be "a pigeon wearing mittens" or "a woman, dressed as a bat"). However, when asked to recollect their clinical experience and name the Rorschach signs that they had found to be most diagnostic of homosexuality, clinicians failed to mention these two signs and named other signs instead, for example, feminine clothing ("a woman's bra" in Card III), humans with confused or uncertain sex, or male or female genitalia. All these signs had a strong verbal associative connection to homosexuality. When asked to rate the associative similarity of a sign with homosexuality, clinicians rated the similarity as high for the popular (and invalid) signs and low for the valid (and unpopular) ones. For example, contrary to statistical evidence, they said that seeing "a woman's bra" in Card III was a sign of homosexuality, and at the same time they failed to recognize seeing "a horrid beast" in Card IV as one. Chapman and Chapman (1969) also studied whether naive observers would make the same errors as the clinicians did. Participants were students in an introductory psychology course. The fabricated clinical materials consisted of 30 Rorschach cards, on each of which there was one response about the ink blot and two statements about the patient who (allegedly) gave this response. For example, a typical card would show an ink blot, a response ("a pigeon wearing mittens") and two statements ("A man who said he has sexual feelings towards other men"; "A man who said he feels sad and depressed much of the time"). The response was taken either from a valid diagnostic category (for example, "a giant with shrunken arms" for Card IV) or an invalid one (for example, "a woman's bra" for Card III).
The two statements were taken from a pool of four symptoms: homosexuality (in 1969, when the study was conducted, homosexuality was still considered to be a mental disorder), depression, paranoia, and inferiority complex. The combinations of responses and statements were manipulated so that there was no statistical relation between homosexuality and the frequency of any of the responses. However, when asked, after working through all 30 cards, what kind of responses were most common for the homosexual patients, participants readily named the invalid ones (for example, seeing "a woman's bra" in Card III). So, naive participants arrived at the same results, and used the same justifications, as experienced clinicians, and yet in neither of the groups were the results valid! Participants seemed to have a set of prior beliefs (probably based on common sense) and they were selectively interpreting available data to support, but not contradict, those beliefs. Even more strikingly, when in follow-up experiments Chapman and Chapman manipulated the valid signs to actually correlate with homosexuality (in most of the cases, mentioning homosexuality on the card was coupled with a valid sign such as seeing a "horrid beast" in Card IV), this had practically no effect on the subjects' conclusions. They still failed to see the connection between homosexuality and the valid signs, and continued to see a connection with the invalid ones (such as seeing a "woman's bra" in Card III). Illusory correlations based on prior beliefs turn out to be quite stable and resistant to change even in the presence of counter-evidence.

The theory of reasoned action and the theory of planned behaviour

The theory of reasoned action (TRA) aims to explain the relationship between attitudes and behaviours when making choices. This theory was proposed by Martin Fishbein in 1967. The main idea of the theory is that an individual's choice of a particular behaviour is based on the expected outcomes of that behaviour. If we believe that a particular behaviour will lead to a particular (desired) outcome, this creates a predisposition known as the behavioural intention. The stronger the behavioural intention, the stronger the effort we put into implementing the plan, and hence the higher the probability that the behaviour will actually be executed. There are two factors that determine behavioural intention: attitudes and subjective norms. An attitude describes your individual perception of the behaviour (whether this behaviour is positive or negative), while the subjective norm describes the perceived social pressure regarding this behaviour (whether it is socially acceptable or desirable to do it). Depending on the situation, attitudes and subjective norms might have varying degrees of importance in determining the intention. In 1985 the theory was extended and became what is known as the theory of planned behaviour. This theory introduced a third factor that influences behavioural intentions: perceived behavioural control. This was added to account for situations in which the attitude is positive and the subjective norms do not prevent you from performing the behaviour, but you do not think you are able to carry out the action.
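To make the structure of the theory concrete, here is a minimal Python sketch of the idea that behavioural intention is a weighted combination of the factors described above (attitude and subjective norm, plus perceived behavioural control in the TPB). The function name, the weights and the rating scale are invented for illustration; they are not taken from Fishbein's or Ajzen's work.

# Illustrative sketch only: hypothetical weights and ratings.
def behavioural_intention(attitude, subjective_norm, perceived_control=None,
                          w_attitude=0.5, w_norm=0.3, w_control=0.2):
    """Combine the TRA/TPB factors into a single intention score.

    attitude, subjective_norm, perceived_control: ratings, e.g. on a -3..+3 scale.
    The weights express the relative importance of each factor, which the theory
    says varies from situation to situation.
    """
    intention = w_attitude * attitude + w_norm * subjective_norm
    if perceived_control is not None:   # the TPB adds perceived behavioural control
        intention += w_control * perceived_control
    return intention

# Example: positive attitude, mildly supportive norms, low perceived control.
print(behavioural_intention(attitude=2, subjective_norm=1, perceived_control=-1))

The point of the sketch is simply that the same attitude can produce different intentions depending on the perceived norms and (in the TPB) on how much control the person believes they have.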

weapons effect

The weapon draws the eyewitness's (EW's) attention away from other details of the event.

Working Memory Model

The working memory model was proposed by Baddeley & Hitch (1974) as an alternative to the multi-store model of memory. It was developed to directly challenge the concept of a single unitary store for short-term memories. The working memory model is based upon the findings of the dual-task study and suggests that there are four separate components to our working memory. The most important component is the central executive; it is involved in problem solving and decision-making. It also controls attention and plays a major role in planning and synthesizing information, not only from the subsidiary systems but also from LTM. It is flexible and can process information from any modality, although it does have a limited storage capacity and so can attend to a limited number of things at one time. Another part of the working memory model is the phonological loop, which stores a limited number of speech-based sounds for brief periods. It is thought to consist of two components: the phonological store (the inner ear), which allows acoustically coded items to be stored for a brief period, and the articulatory control process (the inner voice), which allows sub-vocal repetition of the items stored in the phonological store. Another important component is the visuo-spatial sketchpad; it stores visual and spatial information and can be thought of as an inner eye. It is responsible for setting up and manipulating mental images. Like the phonological loop, it has limited capacity, but the limits of the two systems are independent. In other words, it is possible, for example, to rehearse a set of digits in the phonological loop while simultaneously making decisions about the spatial layout of a set of letters in the visuo-spatial sketchpad. One strength of the WMM is that there is evidence to support the phonological loop. Baddeley et al. (1975) demonstrated the word length effect (short words are easier to recall than long words). When participants were prevented from rehearsing the words by repeating an irrelevant sound, the word length effect was lost, because articulatory suppression fills the phonological loop. A second strength of the WMM is that there is evidence to support the visuo-spatial sketchpad. In Baddeley (1973), participants held a pointer on a moving spot of light whilst visualising the block capital letter F. The tracking and letter-imagery tasks were competing for the limited resources of the visuo-spatial sketchpad, whereas the tracking and verbal tasks used separate components. One weakness of the working memory model is that the central executive is difficult to quantify. Little research has been done to understand the central executive, and nobody knows its capacity limitations. Richardson (1984) pointed out problems with specifying the precise function of the central executive, which means it cannot be falsified. A second weakness of the WMM is that the research is lab based. Whilst this in itself is not a problem, there is the possibility of a lack of ecological validity, especially given the artificial setting. However, the model could be argued to have mundane realism, since the tasks given to participants could potentially represent tasks experienced in daily life (e.g. riding your bike while listening to your iPod).

Semantic memory

Refers to a portion of long-term memory that processes ideas and concepts that are not drawn from personal experience. Semantic memory includes things that are common knowledge, such as the names of colors, the sounds of letters, the capitals of countries and other basic facts acquired over a lifetime.

The adaptive decision-maker framework

There is an increasing recognition of the fact that emotions may influence our thinking and decision-making. The consequences of decisions result in experiencing certain emotions. The memory of such emotions, and the anticipation of them, may then become one of the driving factors in decision-making. One of the models that includes emotions in the process of decision-making is known as the adaptive decision-maker framework. Let's have a closer look at it. In the classical information-processing approach, which was dominated by normative models, the decision-maker was assumed to be completely rational, with complete knowledge and unlimited computational capacity. This was later doubted, first by acknowledging that human computational capacity is not unlimited, and therefore that descriptive models should account for "bounded" human rationality (Simon, 1955). We do not have the mental capacity to consider all aspects and nuances of a complex situation, evaluate and compare all the attributes of all the possible options, and accurately calculate risks and expected outcomes, especially under time constraints. So we should be using simpler decision-making strategies that use fewer cognitive resources. The next step in the same direction was to say that, apart from exhibiting bounded rationality, people actually don't always try to make rational choices: accuracy of decisions is not the only driving force behind human choices. One example of an alternative goal is minimizing the cost or effort involved in the decision (people are not only looking for the best decisions; they sometimes opt for the easiest). The adaptive decision-maker framework (Payne, Bettman and Johnson, 1993) postulates that people possess a toolbox of strategies that may be used in thinking and decision-making tasks, so they may use different strategies in different situations. Some strategies for use when making a choice (considering a set of options or alternatives and picking the best one) are as follows (see the sketch at the end of this entry). Weighted additive strategy (WADD). This strategy is considered to be normative for multi-attribute choice problems (choice problems involving multiple alternatives compared against multiple attributes). This is a maximizing strategy: for every alternative you multiply the value of every attribute by the importance (weight) of the attribute, then calculate the weighted sum, after which you choose the alternative where the weighted sum is the largest. In normative decision-making models (that mathematically justify the most rational choices) this is also known as calculating the "utility" of a choice (hence the name for the normative model, utility theory; see above). This strategy requires a lot of effort. Lexicographic strategy (LEX). Choose the most important attribute and then the option that has the best value for that attribute. Undoubtedly, this strategy is not optimal (in that it simply ignores a number of attributes), but it has been shown that in a variety of situations this strategy is actually reasonable: under some circumstances it does not lead to a significant reduction in accuracy, yet it does lead to a significant reduction in effort. Satisficing strategy (SAT). Determine a specific cut-off point for every attribute. Then consider the first option. For every attribute of this option, compare its value to the cut-off point. If at least one of the attributes is lower than the cut-off, reject the option and consider the next one. Stop when you reach an option that meets all the cut-off points.
If no option passes, the cut-off points are relaxed and the process is repeated. Elimination by aspects (EBA). Choose the most important attribute and eliminate the options that do not meet your requirements for this attribute. Then select the second most important attribute and eliminate more options. Continue until only one option that meets all the requirements remains. Of course, in real-life decisions we do not consistently use one of these clear-cut strategies. People have a toolbox of strategies they may or may not use, and our emotions and other irrational factors come into play. According to this framework, strategy selection is guided by goals. Four meta-goals are proposed. The first is maximizing decision accuracy (WADD): making an attempt to quantify all attributes and consider all possible attributes for all possible options. The second is minimizing cognitive effort (LEX). The third is minimizing the experience of negative emotion. In real-life decision-making, some attributes or options can be emotion-laden. For example, you are choosing a car and you have ruled out one of the brands because its name creates unpleasant associations in your language. (This was the case with the Russian car brand "Zhiguli", which had to be renamed because to the European ear it sounded like "gigolo", dramatically decreasing sales.) In another example, you are choosing a house and you see one that exceeds your expectations, but you are not going to buy it because a violent crime happened in it several years ago. How can negative emotions impact decision-making? There are two competing hypotheses. Hypothesis A: the negative emotion will interfere with the decision, compromising both the speed and accuracy of the decision. In this hypothesis emotion is not part of a decision-making model; rather, it is an external factor that has a negative impact on the process. Hypothesis B: decision-making will directly adapt to the negative emotion. In this case, emotion should be included in decision-making models as an integral part, since accounting for emotions would help us better understand and predict choice outcomes. As will be shown later, hypothesis B gained empirical support in research studies. The fourth meta-goal is maximizing the ease of justification of a decision (to others or to oneself). The authors of the adaptive decision-maker framework argue that inclusion of this meta-goal explains a number of effects that had been established in research but could not be explained by existing thinking and decision-making models. One example is the so-called asymmetric dominance effect, which you will learn about later in this unit (see "Biases in thinking and decision-making"). ATL skills: Thinking. How does this model link to the principles of the cognitive approach to behaviour? In particular, how does the model relate to principle 3 (cognitive processes do not function in isolation)? There is one important difference between the adaptive decision-maker framework and the theory of reasoned action. The theory of reasoned action is an example of a macro-level decision-making model. It focuses on choice outcomes (for example, condom use) and relatively stable characteristics (such as attitudes and perceived norms) that might predict these outcomes. In other words, the theory deals with the results of decisions on a large scale. On the other hand, the adaptive decision-maker framework is an example of a micro-level model. It focuses on the process of making a decision, the strategies being used when processing available information, and so on.
Such models zoom in on decision processes on a smaller scale. Undoubtedly, micro-level models attempt to describe processes that are more situation-dependent, fluid and complex. With such complex and transient objects of research, collecting self-report measures and analysing correlation patterns is no longer a valid method. There should be other methods that allow a deeper insight into the nature of separate acts of decision-making, an insight into the process rather than the outcomes of this process.
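As a rough illustration of how the choice strategies described in this entry differ, here is a minimal Python sketch. The options, attribute values, weights and cut-offs are invented for illustration, and the functions simply follow the verbal descriptions of WADD, LEX and SAT given above; this is not code from Payne, Bettman and Johnson.

# Minimal sketch of three choice strategies from the adaptive decision-maker
# framework, applied to a hypothetical set of options (all data invented).
options = {
    "Option A": {"price": 7, "quality": 9, "reliability": 6},
    "Option B": {"price": 9, "quality": 5, "reliability": 8},
    "Option C": {"price": 6, "quality": 7, "reliability": 9},
}
weights = {"price": 0.5, "quality": 0.3, "reliability": 0.2}  # importance of each attribute

def wadd(options, weights):
    """Weighted additive strategy: pick the option with the highest weighted sum."""
    return max(options, key=lambda o: sum(weights[a] * v for a, v in options[o].items()))

def lex(options, weights):
    """Lexicographic strategy: pick the option best on the single most important attribute."""
    top_attribute = max(weights, key=weights.get)
    return max(options, key=lambda o: options[o][top_attribute])

def sat(options, cutoffs):
    """Satisficing strategy: accept the first option that meets every cut-off."""
    for name, values in options.items():
        if all(values[a] >= cutoffs[a] for a in cutoffs):
            return name
    return None  # in the framework, the cut-offs would then be relaxed and the search repeated

print(wadd(options, weights))                                    # effortful, "normative" choice
print(lex(options, weights))                                     # cheap shortcut using one attribute
print(sat(options, {"price": 6, "quality": 6, "reliability": 6}))  # first acceptable option

Running the sketch shows that the strategies can disagree: WADD and LEX may pick different options, and SAT stops at the first acceptable one, which is exactly the accuracy-versus-effort trade-off the framework describes.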

Weaknesses of MSM

There is too much emphasis on the amount of information taken into memory. The model focuses too much on the structure of memory systems rather than providing an explanation of how memory works (functioning/processing). It is reductionist*, oversimplifying memory processes (Eysenck & Keane, 1995): it is too simple and mechanical in its account of the transfer from one store to another, whereas memory processes are more complex and flexible. *Reductionism is a form of explanation, or an approach to understanding complex things, that simplifies (or reduces) them to their most basic parts. The model also assumes that the stores are single and unitary. It is unlikely that the diverse information in LTM is contained in one simple, unitary store in the same form. Tulving (1972) suggests that LTM can be divided into episodic, semantic and procedural components, stored separately. Cohen & Squire (1980) suggest LTM is divided into two: declarative memory, which involves the recollection of facts and events and includes episodic and semantic memory, and procedural memory, which is memory for how to do things. Evidence comes from amnesia patients who have poor declarative knowledge with no damage to procedural knowledge (Spiers et al., 2001; the case of Clive Wearing; Baddeley, 1997). Atkinson and Shiffrin (1968) focused almost exclusively on declarative knowledge and did not account for procedural knowledge in their model. The model suggests that rote rehearsal is the only way information transfers from STM to LTM, which is too simple and ignores other factors such as the effort and strategies people employ to remember things. Studies have questioned whether the more information is rehearsed, the more likely it is to be transferred to LTM. Rehearsal may be what occurs in laboratory experiments, but this lacks ecological validity: most people rarely actively rehearse information in daily life, yet information is constantly transferred into LTM (Eysenck and Keane, 1995). Rehearsal is not as important as the MSM suggests, and increased rehearsal is no guarantee that information will be stored in LTM. Finally, the MSM under-emphasises interaction between the stores: transfer of information is strictly sequential, and information stays in LTM until retrieved. The model does not consider the possibility that LTM interacts with and even directs the other memory stores, for example by informing sensory memory about what is important to pay attention to, or by helping STM through rehearsal or meaningful chunking.

Thinking and Decision Making:

Thinking is the process of modifying information: we break information down into lesser parts (analysis), bring different pieces of information together (synthesis), and relate certain pieces of information to certain categories (categorization). Unlike other cognitive processes, thinking produces new information; it combines and restructures knowledge to make new information, "going beyond the information given". Decision-making is a cognitive process that involves selecting one of the possible beliefs or actions, that is, making a choice between some alternatives. It is closely linked to thinking because before we choose, we must analyze. Normative models describe the way that thinking should be. They assume that unlimited time and resources are available to make a decision. Utility theory is the normative model for decisions involving uncertainty and trade-offs between alternatives. The rational decision-maker should calculate the expected utility of each option (the degree to which it helps us achieve our goal) and then choose the option that maximizes this utility (see the sketch below). Normative models give us a standard against which real-life thinking and decision-making may be compared. These models are unrealistic; in real life we need to take shortcuts. Descriptive models show what people actually do when they think and make decisions. They focus on an accurate description of real-life thinking patterns.
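As a concrete illustration of the expected-utility rule described above, here is a small Python sketch. The options, probabilities and utility values are invented for illustration only.

# Illustrative sketch of the normative "expected utility" rule (all numbers invented).
options = {
    "safe bet":  [(1.0, 50)],                 # list of (probability, utility) pairs
    "risky bet": [(0.5, 120), (0.5, 0)],
}

def expected_utility(outcomes):
    """Sum of the utility of each outcome weighted by its probability."""
    return sum(p * u for p, u in outcomes)

for name, outcomes in options.items():
    print(name, expected_utility(outcomes))   # safe bet -> 50.0, risky bet -> 60.0

best = max(options, key=lambda o: expected_utility(options[o]))
print("Normative choice:", best)              # the option with the highest expected utility

In this made-up example the normative model prescribes the risky bet (expected utility 60 versus 50), even though many real decision-makers would prefer the safe bet, which is exactly the kind of gap between normative and descriptive models discussed above.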

Serial position curve

This term is a memory-related term and refers to the tendency to recall information that is presented first and last (like in a list) better than information presented in the middle.

Two factor theory of emotion

Two factors interact to determine specific emotions: physiological arousal, and an emotional interpretation and labelling of that physiological arousal.

Effect of emotions on cognitive processing

What is emotion? Emotion can be defined as the body's adaptive response to a particular situation. Outline COGNITIVE factors in emotion: Cognitive psychologists assume that conscious and unconscious mental processes can influence emotions. They focus more on the mental aspects of emotions and how unconscious and conscious mental processes influence emotional experiences and actions. This guides cognitive and rational emotive therapies, which assume that cognitions and emotions are interrelated and that negative cognitions will lead to negative emotions. Those negative emotions may come out of people's faulty interpretations of experiences, and it is by raising awareness of, challenging and changing those beliefs that we may alter our mood. Explain the interaction between cognitive and biological factors in emotion: Emotions can be initiated through physiological and cognitive factors. It is assumed that emotions consist of three components: physiological changes (biological reactions) and the subjective feeling of the emotion (cognitions), which then lead to an associated behaviour, and thus the emotion is expressed. Thus, cognitive and biological factors interact to produce an emotional response to an event. Therefore a bidirectional relationship exists between cognitive and biological factors in emotion.

Primacy effect

When items at the beginning are easily recalled

Recency effect

When items at the end are easily recalled

Albarracin et al (2001)

conducted a meta-analysis of the TRA and TPB as models of condom use. The practical significance of the study lies in the fact that identification of important attitudinal or behavioural predictors of the frequency of condom use can help greatly in the prevention of HIV and STD epidemics. The meta-analysis comprised 42 published and unpublished articles and a total of 96 data sets (which were brought together in one combined data matrix). Fitting the models of the TRA and TPB to this data set and estimating the predictive validity of the models, they arrived at the following conclusions. Both the TRA and TPB are successful predictors of condom use: the average correlation between intention and behaviour in these models is 0.51. Notice that this is a weaker intention-behaviour association than that reported by Ajzen and Fishbein (1973). One possible explanation is that people generally have less control over condom use than over behaviours in some other domains. It makes a difference whether behaviour is assessed retrospectively or prospectively. In the former case, assessments of intentions and behaviour are carried out at the same time; in the latter case, intentions and behaviour are assessed at different time periods. Naturally, the intention-behaviour relationship is weaker for behaviours assessed prospectively (0.45) than for behaviours assessed retrospectively (0.57). However, even 0.45 is high enough to say that the predictive validity of the model is high. "Thus, people are more likely to use condoms if they have previously formed the corresponding intentions. These intentions to use condoms appear to derive from attitudes, subjective norms, and perceived behavioural control" (Albarracin et al., 2001). It should be noted, however, that this study relies on the assumption that self-reported condom use is an accurate reflection of the participants' actual everyday behaviour. Another important limitation is that although studies like this are based on complex models and they quantitatively estimate the fit of the theoretical model to the observed data, they are still correlational. This means that the direction of causality, although plausible, is still just inferred. Is it possible, for example, that behaviour influences intentions rather than the other way around? Longitudinal studies in which intentions and behaviours are assessed at different points in time can provide valuable insights into the direction of causality in decision-making models. Finally, it should be noted that the study had a lot of potential implications for HIV prevention efforts.

Luce, Bettman, Payne

hypothesized that task-related negative emotion will encourage decision makers to process information more exclusively (because they attach more importance to the accuracy of decision) and at the same time in a way that avoids emotionally difficult trade-offs between options. Twenty seven undergraduate students were asked to imagine they were members of a charity that provides children with financial support. Their task was too choose one child from a group of five children described (photo on desk top) Sophie: The importance of the attributes was explained in the following way .Willingness to learn and personality are important because children who score better on these attributes would be more likely to help others in their community Age is important because a relationship will have to be established with the child through correspondence which requires a certain maturity Family size is important because the entire family benefits from the charity Living conditions are important because relatively worse conditions. the charity should target children living in You will see that the attribute values conflict with each other-there is no dominant alternative that is, an alternative that would be best across all the attributes. The task was performed by using the "Mouse lab" computer program. In this software the choice was presented to the subjects in the form of a matrix (much like Table 3.5), but all information in the cells was hidden behind boxes that could be opened by a mouse click. The software recorded the order in which boxes were opened, the time spent in each box, and the final choice. The order in which boxes were opened was subjects used two patterns e observed through counting the number of times after opening box A, opening a box for the same alternative but a different attribute (alternative-based transitions) after opening box A, opening a box for the same attribute but a different alternative e (attribute-based transitions) Attribute-based transitions involve fewer trade-offs and so theoretically they help you to avoid making emotion-laden choices. For example, if you open the box saying that Kito's personality is "very good", it could be emotionally difficult for you to find out that Kito's willingness to learn is "very poor", since it creates a trade-off and poses a difficult dilemma. However, it is emotionally easier to avoid such trade offs and open "personality" boxes for other children In order to manipulate negative emotion, participants were split into two groups john Hand: . In the higher-emotion group participants wee provided with a more specific and extensive background text describing the children's situation. They were also told that the four eliminated children were not likely to receive enhance the perception of the choice as high-stakes. .In the lower-emotion group the background texts were more superficial and participants were told that the four remaining children were likely to receive support elsewhere. Results of the study supported the pattern that was predicted by the adaptive decision-maker framework. Participants in the higher-emotion group opened a larger number of boxes and spent more time on the task (which shows that they were processing information more extensively probably due to more importance attached to the accuracy of decisions). Participants in the higher-emotion group engaged more frequently in attribute-based transitions (which shows that they were avoiding emotionally difficult trade-offs between options).
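To make the distinction between the two transition types concrete, here is a small Python sketch of how such transitions could be counted from a recorded sequence of box openings. The click sequence and the names "Ana" and "Ben" are invented (only "Kito" appears in the description above), and this is not the actual Mouselab analysis code.

# Sketch: counting alternative-based vs attribute-based transitions (invented data).
clicks = [
    ("Kito", "personality"), ("Ana", "personality"), ("Ben", "personality"),  # same attribute, different children
    ("Ben", "age"), ("Ben", "family size"),                                   # same child, different attributes
]

def count_transitions(clicks):
    """Count the two transition types between successive box openings."""
    alternative_based = attribute_based = 0
    for (alt1, attr1), (alt2, attr2) in zip(clicks, clicks[1:]):
        if alt1 == alt2 and attr1 != attr2:
            alternative_based += 1   # same child, different attribute
        elif attr1 == attr2 and alt1 != alt2:
            attribute_based += 1     # same attribute, different child
    return alternative_based, attribute_based

print(count_transitions(clicks))    # -> (2, 2) for this invented sequence

The higher-emotion pattern reported in the study would correspond to a larger share of attribute-based transitions in such counts, since comparing children one attribute at a time postpones the emotionally difficult trade-offs.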

information-processing

information processing is an approach to understanding human thinking which assumes that people process information in much the same way as computers do

