201 Clair

Speech processing stages

1. Decoding the complex acoustic signal into phonetic segments (extracting discrete elements)
2. Identifying syllables and words
3. Word recognition
4. Comprehension
5. Integration into context

Reading processes (Balota et al., 1999)

Orthography, phonology (sounding out words), semantics, syntax, and higher-level discourse integration.

Word superiority effect

A target letter is more readily detected in a letter string that forms a word than in a non-word string. Pseudoword superiority effect: letters in pronounceable nonwords are recognised more quickly than in unpronounceable ones.

Genetic evidence for innate language abilities

Twin studies (e.g. Bishop et al., 2006): 130 twin pairs showed strong genetic influences on both structural and pragmatic language impairments in children. Does this mean language is innate?

Syntactic Ambiguity

(sometimes called structural) -Where the meanings of the component words can be combined in more than one way e.g. "wealthy men and women" •Either "wealthy men and wealthy women" •Or "wealthy men" and (not so wealthy) women •"kids make nutritious snacks" -We can see that the manner in which words are grouped in syntactic structure reflects the way meanings are combined by the semantic component of the grammar

Clair summary lecture 4, reading models

• Letter features and word recognition interact in the connectionist (interactive activation) model
• The interactive activation model does not consider phonology or context effects on reading
• The DRC model considers orthographic, phonological and semantic effects on reading, consistent with the different types of dyslexia seen after brain injury or in developmental disorders
• The connectionist triangle model is a highly interactive model that accounts for many aspects of typical and disordered reading, including effects of context (semantics) and consistency
• Models of reading based on eye-movement studies account for some reading phenomena (e.g. word frequency effects) and can be applied across languages

Dual Route Cascaded model (Coltheart et al, 2001)

• Two main routes, both starting with orthographic analysis (identifying and grouping the letters on the page into words)
• The model proposes different processes when we read words vs nonwords
• Nonwords are assumed to be read only via Route 1 (non-lexical, grapheme-to-phoneme conversion), whereas words can be read via all routes
• "Cascaded" because activation at one level is passed on before processing at that level is complete
Explains types of dyslexia:
• Surface dyslexia: rely on Route 1; can read regular words and nonwords via grapheme-to-phoneme mapping (but struggle with irregular words)
• Phonological dyslexia (more common): very poor reading of nonwords; cannot read new/unfamiliar words because grapheme-to-phoneme mapping is impaired
• Deep dyslexia (uncommon): problems reading unfamiliar words and inability to read nonwords; semantic errors, e.g. 'ship' read as 'boat'
Reading can't rest solely on grapheme-to-phoneme mapping: pseudoword pronunciation, e.g. FASS said like MASS, not FARCE or PASS, perhaps due to the frequency with which letter combinations are pronounced in certain ways.
Criticisms: ignores consistencies within letter groupings; words with consistently pronounced letter groupings are read faster.
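A minimal sketch can make the division of labour between the two routes concrete. This is not Coltheart et al.'s implementation: the tiny lexicon, the grapheme-phoneme rules and the lexical_ok/nonlexical_ok switches are invented for illustration, and cascaded activation is ignored.

```python
# Toy illustration of the dual-route idea (not the actual DRC implementation).
# Lexical route: whole-word lookup; non-lexical route: grapheme-phoneme rules.
# Lexicon, rules and pronunciations below are invented for the example.

LEXICON = {"cat": "kat", "yacht": "yot"}          # 'yacht' is irregular

GPC_RULES = {"c": "k", "a": "a", "t": "t", "y": "y", "h": "", "o": "o", "d": "d"}

def nonlexical_route(letters):
    """Assemble a pronunciation letter by letter (regularises irregular words)."""
    return "".join(GPC_RULES.get(ch, ch) for ch in letters)

def read_aloud(letters, lexical_ok=True, nonlexical_ok=True):
    """lexical_ok=False ~ surface dyslexia; nonlexical_ok=False ~ phonological dyslexia."""
    if lexical_ok and letters in LEXICON:
        return LEXICON[letters]            # lexical route: whole-word lookup
    if nonlexical_ok:
        return nonlexical_route(letters)   # Route 1: grapheme-to-phoneme conversion
    return "<no response>"                 # nonword with a damaged non-lexical route

print(read_aloud("cat"))                        # word: either route succeeds -> 'kat'
print(read_aloud("dat"))                        # nonword: read only via Route 1 -> 'dat'
print(read_aloud("yacht", lexical_ok=False))    # 'surface' pattern regularises -> 'yakt'
print(read_aloud("dat", nonlexical_ok=False))   # 'phonological' pattern fails on nonwords
```

Knocking out one route reproduces the broad surface vs phonological dyslexia patterns described above.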

Connectionist triangle model (Plaut et al, 1996)

• All information is used to read both words and nonwords, via cooperative and competitive interactions
• Two pathways: a direct orthography → phonology pathway and an indirect pathway through word meaning (semantics)
• Elements are similar to those in the DRC model, but they are highly interactive: all pathways contribute simultaneously, rather than one system dominating depending on the nature of the word as in the DRC model. Simultaneous contributions by all three components are widely supported by reading research.
• Can account for different types of dyslexia according to varying degrees of damage to the interactions, and includes a specific learning mechanism.

Orthography v phonology

• Letter-sound knowledge
• Phoneme awareness: e.g. knowing that 'cat' rhymes with 'sat' and that a new word can be made if the 'c' is removed from 'cat'.

Sapir-Whorf Hypothesis

• Certain thoughts of an individual in one language cannot be understood by those who speak another language
• The way people think is strongly affected by their native language
• Evidence suggests the effects of language on thought are rather weak
• Roberson et al. (2000) provide supporting evidence: the Berinmo have only five basic colour words and classify blue and green as the same; judgements of which colours were most similar followed language boundaries

Phonology's role in reading: is it necessary and sufficient for successful reading?

Common view (weak phonological model): phonological processing of visual words is fairly slow and inessential for word identification. Strong phonological model: a phonological representation is a necessary product of processing printed words, even though explicit pronunciation of the phonological structure is not required (Frost, 1998). Strong phonological models therefore predict that phonological processing is mandatory and automatic: some phonological coding occurs whenever a word is presented visually, even when it hampers performance. Hence studies use homophones (sound the same, spelt differently), e.g. Van Orden (1987): more errors with 'rows' than 'robs' when deciding whether words named flowers (because 'rows' sounds like 'rose').

Top down influences on speech perception

• Context effects in sentences: e.g. Warren and Warren (1970), '*eel' — the missing phoneme is filled in according to context
• Ganong effect: word context affects categorical perception, referred to as the lexical identification shift, i.e. perception shifts towards a real word when the extremes of the continuum are a word and a nonword, e.g. dash-tash
• Myers and Blumstein (2007) found the boundary between /k/ and /g/ shifted according to which option formed a word, e.g. giss vs kiss

Evidence for language and thought links

Different languages have multiple words for things that are important in their environment, e.g. 14 ways to describe rice in the Philippines. Lack of shared vocabulary can be a barrier when educating communities about contraception.

Reading compared to spoken language

• Each word is separate
• No problems with coarticulation or speaker variability
• Less memory load
• Punctuation marks aid interpretation
• No speech-reading (lip-reading) cues to aid perception
• No acoustic prosodic cues to guide sentence structure/meaning, e.g. pitch, intonation, stress timing
• The ability to read versus the ability to understand spoken language varies across individuals

Word meaning accessed without access to internal phonological representation

Hanley and McDonnell (1997): after a stroke, patient PS understood word meanings but could not pronounce words correctly, so he does not seem to access internal phonological representations. He could correctly pronounce only 38 of 80 words that he had previously defined flawlessly.

Dysexecutive agraphia

-Inability to write due to frontal lobe damage

Cross Modal Priming Experiment (Van Petten et al 1999)

Measured the N400 brain response to the final word in a sentence. Context influenced N400 evoked responses around 200 ms before behavioural word recognition, suggesting an early influence of context on perception. Therefore there is a strong top-down context effect early in processing, aligning with the cohort model.

TRACE model (McClelland and Elman, 1986)

• A network based on connectionist principles: an artificial neural network that attempts to model the processing of speech from acoustic input to word extraction, in a manner consistent with what is known of human perceptual abilities
• Three levels of speech representation: features, phonemes, words
• Key components: speech feature nodes connect to phoneme nodes, which connect to word nodes
• Nodes influence each other through excitatory or inhibitory connections; connections within the same level are inhibitory
• Bottom-up activation from speech features proceeds upwards; context effects cause downward (top-down) activation
• The word recognised depends on the activation level of possible candidate words; hence the word superiority effect, stronger in the 80% (high word) condition
• Mean reaction times are faster for words, and context helps words more than nonwords
Pros:
• Explains the lexical identification shift in categorical speech perception
• Includes the interactive effect of top-down and bottom-up processes
• Predicts word frequency effects on speech perception
• Copes with noisy data (degraded input)
Cons:
• Exaggerates the importance of top-down processing
• If words are mispronounced they are less accurately heard, yet the model assumes top-down effects will compensate
• Problems with the variable timing of speech sounds and speech rates between speakers; information on timing is not included
• Relies heavily on computer simulations of recognition of one-syllable words
• Does not consider all factors, such as written word form (orthography) effects on word recognition
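The excitatory/inhibitory dynamics are easier to see in a toy loop. The following is a rough sketch in the spirit of TRACE rather than the published simulation: the phoneme set, weights and iteration count are invented, and the time slices central to real TRACE are omitted. It illustrates how a lexical node can bias an ambiguous onset (the lexical identification shift).

```python
import numpy as np

# Toy interactive-activation loop in the spirit of TRACE (not the original model).
# The lexicon contains 'kiss' but not 'giss', so top-down feedback from the word
# node pushes an acoustically ambiguous /k/-/g/ onset towards /k/.

phonemes = ["k", "g", "i", "s"]
lexicon = {"kiss": ["k", "i", "s"]}

evidence = np.array([0.5, 0.5, 1.0, 1.0])   # onset equally consistent with /k/ and /g/
phon_act = np.zeros(4)
word_act = {w: 0.0 for w in lexicon}

for _ in range(15):
    # word nodes receive bottom-up excitation from their constituent phonemes
    for w, phs in lexicon.items():
        word_act[w] = 0.2 * sum(phon_act[phonemes.index(p)] for p in phs)
    # phoneme nodes receive the acoustic evidence plus top-down support from words
    top_down = np.zeros(4)
    for w, phs in lexicon.items():
        for p in phs:
            top_down[phonemes.index(p)] += 0.3 * word_act[w]
    phon_act = evidence + top_down
    # within-level (lateral) inhibition between the competing onset phonemes
    k_old, g_old = phon_act[0], phon_act[1]
    phon_act[0] = max(0.0, k_old - 0.2 * g_old)
    phon_act[1] = max(0.0, g_old - 0.2 * k_old)

print(dict(zip(phonemes, phon_act.round(2))))   # /k/ ends up more active than /g/
```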

Phonology

The sound system of a language; there are regional differences, but English is described as having about 45 phonemes. Individual phonemes are characterised by phonetic features: voicing, place and manner of articulation, e.g. /p/ is a bilabial stop. Phonemes are coded by the IPA.

Interactive Activation Model of word recognition (McClelland and Rumelhart, 1981)

• Three levels of recognition: feature, letter and word
• When a feature is recognised it activates consistent letters and inhibits others, e.g. H and M are activated while S is inhibited
• The letter is then activated, and all words with this letter in this position are activated
• Combines top-down and bottom-up processing, e.g. using word frequency in prediction
• Strengths: accounts for the word and pseudoword superiority effects and for top-down lexical knowledge
• Limitations: no account of the role of meaning; too much emphasis on single letters (what about jumbled-word reading?); doesn't account for longer words or phonological processing; not based on real reading

Lexical ambiguity

-Where a single word form has two or more meanings e.g. "Liz bought a pen" •Either an instrument to write with or a place to keep her new calf -The surrounding words and sentences (context) usually make the intended meaning clear

Phonological neighbours

Differ from a word by one phoneme, e.g. for 'gate': get, got, bait, gape. If phonology influences word recognition, words with more neighbours should have an advantage, because the neighbours are also activated. Yates et al. (2008) found that words with many phonological neighbours had shorter fixation times when read within sentences, BUT if words sound very similar they can actually slow processing because they compete with each other.
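The neighbour definition is simply an edit distance of one at the phoneme level, which a few lines can illustrate. A rough sketch with invented phonemic transcriptions; it is not how Yates et al. counted neighbours.

```python
# Small illustration of the neighbour definition (differ by one phoneme through
# substitution, addition or deletion). Transcriptions are rough and invented.

def is_neighbour(a, b):
    """True if phoneme lists a and b differ by exactly one phoneme."""
    if a == b:
        return False
    if len(a) == len(b):                                   # one substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = sorted((a, b), key=len)
    if len(long_) - len(short) != 1:
        return False
    return any(long_[:i] + long_[i + 1:] == short          # one addition/deletion
               for i in range(len(long_)))

lexicon = {
    "gate": ["g", "ei", "t"], "get": ["g", "e", "t"], "got": ["g", "o", "t"],
    "bait": ["b", "ei", "t"], "gape": ["g", "ei", "p"], "dog": ["d", "o", "g"],
}

target = lexicon["gate"]
print([w for w, phons in lexicon.items() if is_neighbour(target, phons)])
# -> ['get', 'got', 'bait', 'gape'], i.e. the phonological neighbourhood of 'gate'
```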

English evidence for phonological processing

• Homophones
• Phonological neighbours
• Phonological priming, e.g. /klip/ priming 'clip'
But Māori children can be above reading age despite poor phonological awareness; their language did not naturally have an alphabetic written form.

Speech as form of music

-way we talk to babies - Recording of speech on repeat sounds musical - overlap in brain areas for perceiving speech and music

Dual Stream Model Speech/ language processing

2 Functionally distinct computational/ neural networks that process speech/ language information. One interfaces sensory/ phonological networks with conceptual-semantic systems and one interfaces sensory/phonological networks with motor-articulatory systems.

Informational masking

Cognitive load and attentional factors affecting perception.

Speech sounds

Decoding speech sounds involves detecting speech features and categorising the sound as a phoneme (a meaningful phonetic segment).

3 Route framework

A framework for the processing of spoken words, proposed to account for the performance of people with brain injury. Evidence for the three-route model comes from word versus nonword perception:
• Route 1: a heard word activates both its meaning and its spoken form
• Route 2: the meaning of heard words is not activated
• Route 3: rules for converting a heard word into its spoken form
Using Route 2 (input lexicon) — word meaning deafness (Franklin et al., 1996):
• Dr O could repeat words (80% accuracy) but not nonwords (7% accuracy)
• 94% accuracy at distinguishing words from nonwords
• Could understand written words, but had problems understanding spoken words
Route 3 (phonological) — deep dysphasia (aphasia) (Jefferies et al., 2007):
• Poor phonological production in word repetition, picture naming and reading tasks
• Cannot do phoneme manipulation tasks, e.g. take the /k/ off the word 'cat' — what word do you get? What word do you get if you add /r/ instead? What words rhyme?

Speaking vs writing

Four differences between speaking and writing:
1. Speakers generally know who is receiving the message
2. Speakers mostly receive moment-by-moment verbal and nonverbal feedback (e.g. boredom); consequently, speakers often adapt what they are saying in response to feedback
3. Speakers generally have less time to plan their production than writers do, which is why spoken language is shorter and less complex than written language
4. Writers generally have direct access to what they have produced so far, whereas speakers do not (Cleland & Pickering, 2006)
• Spoken language is fairly informal, simple in structure and a rapid form of communication
• Written language is more formal and complex in structure
• Consider brain-damaged patients who can speak fluently but not write (Kolb & Whishaw, 2003)
• Spoken language conveys meaning and grammatical information by prosody (rhythm, intonation, rate, volume); gesture is also used for emphasis
• Written language relies heavily on punctuation to supply similar information to prosody, and to foreshadow or signal what is coming next (e.g. "on the other hand")

Energetic masking

Frequency content and loudness of competing sounds mask speech signal.

Word processing

Goldberg, Russell & Cook (2003) conducted a meta-analysis comparing word-processing and longhand. -They concluded that "students who use computers when learning to write are not only engaged more in their writing but they produce work that is of greater length and higher quality" •Maybe because word-processed essays tend to be better organised than those written in longhand (Whithaus, Harrison & Midyette, 2008) •Earlier work (Kellogg & Mueller, 1993) comparing the two formats found only small differences in writing quality or speed •The three stages (planning, sentence generation and revision) are essential to good quality writing - they are not influenced by the way in which the text is written

Nodes

Groupings of words that are connected i.e. THE GIRL (node 1) Knew (2) the answer (3) was wrong (4)

Reading research in psych

How the printed word is converted to speech, with less emphasis on meaning extraction. Research approaches include:
1. Lexical decision task: deciding whether a string of letters forms a word, e.g. borough, kashel
2. Naming task: say a printed word aloud as quickly as possible (speed and accuracy measured)
3. Eye-movement tracking: measures attention to words during reading, but not all reading processes
4. Priming: a prime word is presented shortly before the target word; the prime is related orthographically, semantically or phonologically
5. Event-related potentials: time course of processes, e.g. N400

Semantic priming effect (context effects)

Lexical decision task: is the second word a real word? Decision times are faster if the previous word is semantically related. Meta-analyses agree this is a very small but consistent effect. Neely (1977) used expected/unexpected primes and same vs different categories to manipulate expectations.

Motor theory (Liberman et al, 1967)

Listeners mimic the articulatory movements of speakers, producing motor signals that facilitate speech perception. 'Please say shop': if a 50 ms gap is inserted before 'shop', we hear 'chop'. Fadiga et al. (2002) applied TMS (transcranial magnetic stimulation) to the tongue region of motor cortex while participants listened to words, and found more activation of tongue muscles when listening to words involving tongue movement. This does not show a causal link between motor areas and speech perception. D'Ausilio et al. (2012): TMS to lip and/or tongue motor areas facilitated speech perception in a noise condition but not in a quiet condition.

Coarticulation

The overlapping of adjacent articulations: speech sounds blur into one another, and the context of sounds influences how they are produced, e.g. 'mashed potatoes', 'sixth'. Adjacent phonemes affect the acoustics of a speech sound.

Phonological priming

Primed with either 'klip' or 'plip' before 'clip': processing is faster after the phonologically identical prime than after a prime with similar spelling but different phonology (Rastle and Brysbaert, 2006). This is said to be due to activation of the lexical representation of the word associated with the phonological prime, so the time savings reflect the time taken for representations to reach a critical threshold. Shown by many studies over the last two decades; major evidence for rapid, automatic and obligatory phonological processing in lexical access.

Gillon 2003 Phonological processes:

Storing, retrieving and using phonological codes in memory, phonological awareness, and speech production. Phonological awareness is an explicit understanding of words and their sound structures. Phonics: letter-sound knowledge. Phonological awareness: rhyme, phoneme deletion and substitution.

NVN strategy

The NVN (noun-verb-noun) strategy:
• Active: "the woman saves the man" — SUBJECT (the woman) = AGENT, VERB (saves), OBJECT (the man) = THEME
• Passive: "the man is saved by the woman" — the SUBJECT (the man) is NOT the agent, and the noun after the verb (the woman) is NOT the theme

Psycholinguistics

The study of the psychological processes involved in language is known as psycholinguistics (Harley, 1995). It was formulated as a discipline in the 1950s, but has its origins in the previous century. Early psychological approaches saw language as a simple device that could generate sentences by moving from one state to another (i.e. input/output). Chomsky was the primary critic of behaviourism, proposing transformational grammar, which was a good explanation of intuitions about language and of language acquisition, but not a good explanation of the processes involved in using language.

Filik and Barber (2011) silent reading accents

Used limericks and eye-movement measures; tongue twisters also take longer to read silently, further strengthening the argument that inner speech resembles outer speech.

Inference drawing

We use a lot of implicit information when pulling together sentences heard/ used in discourse •Logical inferences -Depend only on the meanings of words -e.g. "widow" •Bridging inferences/Backward inferences -Establish coherence between current and preceding text •Elaborative inferences/Forward inferences -Embellish or add details to text from general knowledge •Theoretical controversy about the number and nature of inferences drawn •Bransford et al (1972) -Constructionist approach -Numerous elaborative inferences drawn when we read a text •McKoon and Ratcliff (1992) -Minimalist approach -Only a few inferences are drawn automatically • Readers (& listeners) typically form bridging inferences to make coherent sense of text • People rapidly use contextual information and knowledge of the world to form inferences • Many inferences are often drawn automatically • Readers' goals are important in determining whether predictive inferences are drawn • Readers with superior reading skills draw more inferences than other readers

Categorical speech perception

When acoustics are manipulated in a continuous way so that speech sounds shift from one phoneme to another, the shift in perception is categorical (all or nothing) rather than continuous. Infants are universal discriminators until about 9 months, when they tune in to their own language(s). In adults, acoustic differences may still be discriminated, but the sounds are labelled as the same speech sound. E.g. in Japanese, /l/ and /r/ are not contrasting phonemes, so they fall into the same category and are not discriminated. 'da' and 'ta' lie on the same continuum, distinguished by voice onset time.
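The all-or-nothing labelling can be illustrated with a toy classifier over the /da/-/ta/ voice-onset-time continuum. A minimal sketch; the 30 ms boundary is an illustrative value, not a measured one.

```python
# Toy demonstration that a continuously varied cue (voice onset time) is labelled
# categorically: the percept flips at a boundary even though the cue changes smoothly.

def label_stop(vot_ms, boundary_ms=30):
    """Label a stop on the /da/-/ta/ continuum by voice onset time."""
    return "ta" if vot_ms >= boundary_ms else "da"

for vot in range(0, 61, 10):                  # continuum changes in equal 10 ms steps
    print(f"VOT = {vot:2d} ms -> perceived as /{label_stop(vot)}/")
# The label flips abruptly between 20 and 30 ms; within-category differences
# (e.g. 40 vs 60 ms) receive the same label even if they remain discriminable.
```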

Segmentation

When does one sound end and another begin? Segmentation is helped by certain sequences never occurring together within a syllable, e.g. the /kf/ across 'tooK Father's'. It is constrained by which words are possible in the language: in 'wuffapple', 'apple' is harder to hear than in 'fapple' because 'wuff' is more word-like than 'f'. Stress patterns are an important acoustic cue: English listeners expect stress on the first syllable and misperceive speech when the stress pattern is wrong.

Broca's and Wernicke's areas

When we speak a heard word, activation proceeds from Wernicke's area to Broca's area through the arcuate fasciculus. There is some truth in the distinction between Broca's and Wernicke's aphasia, BUT:
• It implies that numerous brain-damaged people all have similar patterns of impairment
• Several brain areas are involved in language processing and they are interconnected in complex ways
• People with Broca's aphasia sometimes also have damage to Wernicke's area and vice versa
• The finding that people with Broca's aphasia have greater difficulty speaking grammatically is language specific
• The traditional view focuses too much on specific impairments (e.g. word finding), but people with aphasia can also have problems with attention and memory

Whole language vs phonological coding

A whole word such as 'kite' can be broken into k-i-t-e, but some writing systems, e.g. Japanese kanji, use written symbols that cannot be broken into parts.

Language production

• A goal-directed activity • Communication primary purpose • Motivational and social factors need to be considered (share information, be friendly ....) • A theory of language alone is insufficient to account for language production • We know more about speaking than writing/typing -knowledge -ability •In both forms of language production, it is assumed there is an initial attempt to decide on the overall meaning to be communicated -The actual words to be used are not considered at this early stage •This is followed by production of language (in either form)

Writing and working memory

•People find writing difficult because of the many different cognitive processes involved (attention, thinking, memory....) • the central executive helps with organising and co-ordinating cognitive activities (along with the visuo-spatial sketchpad & the phonological loop) •Writing can involve any and all of these working memory components -Writing quality lower when text written entirely in upper case letters

Garden Path Model

• A garden-path sentence is grammatically correct but designed to lure the reader into an incorrect parsing
• Used to illustrate that when we read, we process language one word at a time
• We start to read the sentence correctly (building up a structure of meaning), but eventually it becomes clear that the next word or phrase cannot be incorporated into the structure built so far: we have been led up/down the garden path
Garden-path model assumptions:
• Only one syntactic structure is initially considered for any sentence
• Meaning is not involved in selecting the initial syntactic structure
• The simplest syntactic structure is chosen, using minimal attachment and late closure:
- The grammatical structure producing the fewest nodes (noun phrase, verb phrase) is preferred [minimal attachment]
- New words encountered in the sentence are attached to the current phrase/clause if grammatically permissible [late closure]
Classic example: "The horse raced past the barn fell"
• Initially we parse this as an active intransitive sentence, but struggle when we reach "fell"; we backtrack and look for other possible structures
• After re-reading we realise that "raced past the barn" is a reduced relative clause and that "fell" is the main verb
• The correct reading is "The horse (that was) raced past the barn - fell"
• The example hinges on the lexical ambiguity of "raced", which can be either a past-tense verb or a passive participle. Compare an unambiguous sentence with the same syntactic structure: "The Democratic Presidential nominee helped to her car stumbled". Here "helped" can only be a passive participle, eliminating the garden-path reading
• Late closure assigns the direct object to the first clause
• There is evidence (from eye-movement data) that readers follow the principles of minimal attachment and late closure
• What is more crucial for the GPM is that semantic factors do not influence construction of the initial syntactic structure
• Breedin & Saffran (1999) reported a patient with severe loss of semantic knowledge due to dementia who performed normally on syntactic tasks, suggesting that semantic knowledge is not required for parsing
• Such ambiguity can often be reduced with correct punctuation, i.e. comma usage
Strengths:
• Simple and coherent account
• Minimal attachment and late closure often influence selection of initial sentence structures
Limitations:
• Word meanings can influence the assignment of structure
• Prior context and misleading prosody can influence interpretation earlier than assumed
• Hard to test the model
• Does not account for cross-language differences

Parsing

• A method of resolving words and sentences into component parts (to facilitate meaning) • In order to parse, we need to work out when different kinds of information are used. That is, we need to separate out the syntax (structure) and the semantics (meaning) -In reality, the two occur without conscious thought. Native speaker intuition tends to guide/inform parsing • Four possibilities for the relationship between syntactic and semantic processing -Syntactic analysis generally precedes (and influences) semantic analysis -Semantic analysis usually occurs prior to syntactic analysis -Syntactic and semantic analysis occur at the same time -Syntax and semantics are very closely associated, and have a hand-in-glove relationship •We use parsing to understand what is meant in this sentence •Prosodic cues are used to parse spoken utterances -stress, intonation, rhythm, word duration Pausing helped listeners to parse sentences correctly •Numerous theoretical models of parsing •Most can be divided into either two-stage (serial processing) models or one-stage (parallel processing) models

Aphasia

• A multimodal deficit in the communicative modalities of listening, speaking, reading and writing • Central language disorder. One location of neural damage not four. We do not learn one grammar for listening, one for reading, one for .... • Language impairment when all other intellectual, motor and sensory functions are intact. Distinct from IQ, neuromuscular or hearing difficulties • Caused by brain damage - stroke, brain tumour, traumatic brain injury.... • Broadly divided into fluent (including Wernicke's aphasia) and nonfluent (including Broca's aphasia) on the basis of neuropathology and expressive output • Occurs at all levels of communication (single-word, sentence and conversational) • Word finding difficulty (WFD) or anomia is the most consistent feature of aphasia • We have all experienced the "tip-of-the-tongue" phenomenon (?mild form of anomia)

One stage models of parsing

• All sources of information (syntactic and semantic) called constraints are used at the same time to construct a syntactic model of the ambiguous sentence • Most common model: -Constraint-based theory (MacDonald, Pearlmutter & Seidenberg, 1994) -All relevant sources of information or constraints are immediately available to the parser • Unrestricted race model (van Gompel, Pickering & Traxler, 2000) -Combines aspects of constraint based and garden path models • Good-enough representations (Ferreira, Bailey & Ferraro, 2002) -There are limitations with nearly all theories -"The language processor is believed to generate representations of linguistic input that are complete, detailed and accurate" -These beliefs are often wrong

Complex inferences

• Causal inferences are common form of bridging inference -Ken drove to London yesterday -The car kept overheating • Garrod & Terras (2000) described two stages -Bonding (low-level process involving automatic activation of words from preceding sentence) -Resolution (ensure the overall interpretation is consistent with the contextual information) • The extent to which inferences are drawn depends on individual differences

Spoken language common ground

• Common ground - Includes representations of information that is physically co-present, linguistically co-present (just been mentioned) as well as about cultural and/or community membership - Mutual beliefs, expectations and knowledge shared by a speaker and a listener- Getting on the same wavelength - Especially likely to occur in the context of a dialogue involving two or more people (extremely common in the real world) •Communication would be most effective if speakers took full account of listeners' knowledge and the common ground - but this is often too cognitively demanding •In practice speakers generally focus mainly on their own knowledge (rather than the listener's) -Presumably to make their lives easier •In reality, speakers make more use of common ground: -when time is not limited, -when interaction is possible and -when the listener states there is a problem

Anticipatory and Perseveration errors

• Dell et al. (1997a) -Most speech errors are either •Anticipatory: Sounds/words are spoken ahead of their time [a cuff of coffee] •Perseverated: Sounds or words are spoken later than they should have been [beef needles] -Expert speakers plan ahead more than non-expert speakers • Vousden & Maylor (2006) -8- year-olds, 11-year olds & young adults -"simple slender silver slippers" Strengths • Predicts many speech errors that occur -Mixed-error effect: processing can be highly interactive • Notion of spreading activation provides links to other cognitive processes • Plausible mechanisms for monitoring errors Limitations • De-emphasises processes related to the semantics • Not designed to predict time taken to produce spoken words • Interactive processes more apparent in errors than error-free speech • Insufficient focus on the extent of interactive processes

Discourse processing

•Previously, we've looked at single word and sentence processing •Discourse processing (written or spoken) goes beyond this to connected speech/sentences -By its very nature, discourse provides a context that isolated sentences do not •Requires a certain amount of inference drawing -we draw inferences most of the time, even though we're generally unaware of doing so

Jargon aphasia

• Extreme form of fluent aphasia • Severe problems with grammatical processing - both comprehension and production • Severe word finding difficulties • Numerous neologisms • Typically unaware of their errors (poor self- monitoring) so do not attempt to self-correct • But -even those severely impaired can communicate a great deal: anger, delight, puzzlement, surprise, humour... • Seem to speak fairly grammatically • The deficit may occur at a level of phonological encoding immediately after lexical access (Olson et al., 2007) • Inadequate monitoring of own speech (Eaton et al., 2011) • Impaired phonological processing (Harley, 2013) • Phonemes found in neologisms are determined by: -Phonemes present in the target word -Phonemic frequency -Phonemic recency • Difficult to judge grammaticality due to neologistic errors • Some evidence that even neologisms adhere to syntactic processing (e.g. an "-ed" ending to indicate the past tense) Marshall, 2006 • Strong tendency to produce consonants common in the language • Tend to include recently used phonemes -Presumably because they are still activated

Spoken language

• Generally straightforward and effortless
• Relatively fast: 2-3 words per second (approximately 150 words per minute, 230 syllables per minute)
• The reality suggests that, while rapid, speaking may not require as few processing resources as it seems
• We use strategies to reduce processing demands:
- Preformulation: about 70% of our production consists of word combinations we've used before (auctioneers, sports commentators)
- Underspecification: simplified expressions in which meaning is not fully expressed, e.g. "or something", "and things like that"
• Speech is generally accurate, but imperfect and prone to various errors (roughly one per 500 sentences)
• The study of speech errors is important: it gives insight into how speech production works and sheds light on the extent to which speakers plan ahead
• Speech errors: "The inner workings of a highly complex system are often revealed by the way in which the system breaks down" (Dell, 1986)

Selective description

• If asked to describe a story we've read we discuss major events and themes and leave out minor details • Therefore our description is highly selective • At a general level, our processing of stories involves relating the information contained in the text to relevant structured knowledge stored in long-term memory

Minimalist approach to inferences

• McKoon and Ratcliff (1992) -Inferences are automatic or strategic (goal directed) -Some automatic inferences establish local coherence -Other automatic inferences rely on information readily available because it is explicitly stated -Strategic inferences are formed in pursuit of the reader's goals; they sometimes serve to produce local coherence -Most elaborative inferences are made at recall rather than during reading

Agrammatism

• Patients with agrammatism tend to: -Produce short sentences containing content words -Lack function words ("the", "in", "and") -Lack inflections (word endings) • Supports the notion that production involves a syntactic level at which the grammatical structure of an utterance is formed • The various symptoms of agrammatism are often not all present in any given patient • Biassou et al (1997) -when reading words aloud patients made more phonological errors on function words than on content words • Beeke et al (2007) -Research context may underestimate grammatical ability -Speaking in more naturalistic settings (at home) is more grammatical

Prosody

• Rhythm, intonation, rate, volume, stress, emphasis • "so, for lunch today he is having either lamb or chicken and fries" • Variation in how much prosodic cues are used • Use does not seem to indicate responsiveness of speaker to listener (i.e. audience design) • Lea (1973) -Ends of sentences were generally signalled by prosodic cues • Keysar and Henley (2002) -Speakers overestimate their listeners' understanding of their use of prosodic cues

Schema

• Schemas are well-integrated packets of knowledge about the world, events, people and actions • Schemas influence comprehension and retrieval processes -Allow us to form expectations -Help us to make the world a more predictable place than it would otherwise be • Mean recall probability for objects as a function of schema consistency -Objects appearing in all five scenes of a given type (e.g. kitchens) were regarded as high in schema-consistency, whereas those appearing in only one were low in schema-consistency •Schema-consistent objects were better recalled than others Strengths • Schematic knowledge helps with text comprehension and general understanding -Also responsible for many memory errors and distortions • Double dissociation in the neuropsychology literature between schema-based and lower-level knowledge impairments Weaknesses • Differences in the definition of "schema" • Schema theories are hard to test • When schemas are used is unclear • How schemas are used is unclear • Schema theories exaggerate how error prone we are • Schemas affect both retrieval and comprehension processes

Anaphoric Resolution

• Simplest form of bridging inference -A pronoun or noun has to be identified with a previously mentioned noun or noun phrase "Fred sold John his lawn mower and then he sold him his garden hose" • Use a bridging inference to recognise that "he" refers to Fred not John • Gender (he/she) and Number (they) information makes resolution of anaphoric inference easier "Juliet sold John her lawn mower and then she sold him her garden hose"

Error detection

• Speakers often successfully detect and rapidly correct their own errors • Comprehension system (Levelt, 1983) -Through a perceptual loop, speakers detect their own errors by listening to themselves and discovering that sometimes it differs from what they intended • Conflict-based account (Nozari et al., 2011) -Error detection relies on information generated by the speech production system itself rather than the comprehension system • Nozari et al. (2011) tested 29 people with aphasia to determine whether comprehension ability or production ability helped to detect errors • No correlation between comprehension and error-detection. But speech production ability did predict error-detection

Two stage Models of parsing

• Stage one uses only syntactic information to process the sentence
• Stage two uses semantic information
• Of importance is the temporal aspect of the semantic information: when is it actually used in parsing?
• Most common model: the garden-path model (Frazier and Rayner, 1982), so named because listeners/readers can be 'led up the garden path' by ambiguous sentences

Speech production stages

•Speech production consists of four levels •Semantic level •Syntactic level •Morphological level (morphemes) •Phonological level (phonemes) -We assume that this is the actual order (i.e. we engage in planning, followed by working out grammatical structure and the basic units of meaning, finally working out the sounds) •In fact, it is much less neat and tidy

Speech planning

• The first stage in speech production usually involves some kind of planning
• How much forward planning occurs is not clear:
- Clause (part of a sentence containing a subject and verb): participants pause before a new clause (Holmes, 1988)
- Phrase (group of words expressing a single idea): pauses are longer during a complex initial phrase (Martin et al., 2004)
• Speakers generally want to start communicating rapidly (implying little forward planning), but they also want to talk fluently (suggesting much forward planning)
• Flexibility in planning (Ferreira & Swets, 2002)
• Speakers face a trade-off between avoiding errors and cognitive demands
• Extensive planning occurs, but only when cognitive demands are low (e.g. simple sentences)

Speech for communication

• Traditional accounts of language processing are based on monologue - yet most natural form of language is dialogue • Speech comprehension and speech production are interwoven • For most people speech nearly always occurs in conversation in a social context • Importance of partner-specific processing or audience design -Speakers tailor what they say and how they say it to the listener's needs • Requirements of successful communication: Co-operative principle (Grice, 1967) -speakers and listeners must try to be co-operative • Four maxims -Relevance •Speak about things that are relevant to the situation -Quantity •As informative as necessary, but not more so -Quality •The speaker should be truthful -Manner •Make contribution easy to understand •Co-operativeness principle seems reasonable but -It is not clear we need all four maxims (the other three seem to be implied by the maxim of Relevance) -In the real world, some individuals (e.g. second-hand car sales people, politicians) are often guided by self-interest and have been found to ignore one or more maxims • Audience design -Speakers need to take account of specific needs of their listeners (see Common Ground - next slide) • Syntactic priming -Speech tends to follow syntactic structure that has been heard recently (e.g. passive construction) • Gestures -Used to aid understanding and clarification • Prosodic cues -Intonation used to aid meaning • Discourse markers (extra words to help clarify)

Unrestricted Race Model

• Van Gompel, Pickering, & Traxler (2000) -All sources of information are used to identify a syntactic structure, as is assumed by constraint-based models -All other possible syntactic structures are ignored unless the favoured syntactic structure is disconfirmed by subsequent information -If the initially chosen syntactic structure has to be discarded, there is an extensive process of re-analysis before a different syntactic structure is chosen

Sentence comprehension

• Well documented that following damage to the dominant hemisphere many people have an impairment in using the syntactic structure of a sentence to derive its meaning (Caramazza and Zurif, 1976; Caplan and Hildebrandt, 1988; Grodzinsky, 1990; Tyler, 1985)
• Many of these people fail to understand the meaning of a sentence even though they understand the individual words
• Possible explanations have included: lost ability to use the parsing system in its entirety; damage to particular aspects of the parsing system; damage restricted to certain types of neurological damage
• Cognitive neuroscience and event-related potentials have been used to help our understanding of sentence comprehension
Hagoort, Hald, Bastiaansen & Petersson (2004):
• Measured the N400 component of the ERP waveform; the N400 is smaller when there is a match between word meaning and sentence context, so it is very sensitive to semantic processing
• 30 participants; stimulus sentences:
- "The Dutch trains are yellow and very crowded" [true]
- "The Dutch trains are sour and very crowded" [false - word meaning]
- "The Dutch trains are white and very crowded" [false - world knowledge]
• The N400 response to the critical word was compared across the correct sentence, the sentence incorrect on the basis of world knowledge, and the sentence incorrect on the basis of word meaning; the N400 response was very similar for both incorrect sentences

Discourse markers

• Words or phrases that assist communication even though only indirectly relevant to speaker's message • "well", "you know", "but anyway" • Don't contribute to content, but can show politeness •Flowerdew and Tauroza (1995) -Participants understood a videotaped lecture better with discourse markers than without them •They help with focus or topic shifts -"anyway", "be that as it may"

Cohort Model (Marslen-Wilson, 1980)

• Words similar to what is heard are activated: the "word-initial cohort"
• Words are eliminated if they do not match further information, such as semantic or other context
• Processing continues until only one possibility remains: the "recognition point" or "uniqueness point"
• Lexical, syntactic and semantic information are processed in parallel
Word monitoring task:
• Identify a pre-specified target word in a sentence; gives evidence for when the uniqueness point occurs and the word is correctly identified
• Words versus nonwords, e.g. apple vs fapple, in a lexical decision task (is it a word?): the later the position of the key phoneme, the longer it took to decide
Revised cohort model — two key changes:
• Candidate words vary in their level of activation, e.g. all words with similar initial phonemes are activated to different degrees: a range of activations rather than an all-or-nothing effect
• Context affects word recognition at a fairly late stage of processing (the original model said this occurred early); the revised model places stronger emphasis on bottom-up than top-down processing
Supporting evidence from cross-modal priming:
• Listen to speech while performing a lexical decision task (real word versus nonword?); only words activated by the speech input will "prime" the response (i.e. make reaction time faster)
• Part of a recorded word is presented, e.g. 'cap ' (activating different possibilities: captain, capital, caption, capture, etc.)
• A visual letter string (target, e.g. 'boat', 'money') is presented at different times during the sentence; lexical decisions for the visual target are fastest when the heard word is related in meaning to the seen word
• Facilitation was found for target visual probes in each of the three contexts, even at a late stage in word processing; results are consistent with the revised cohort model because the facilitating effects of context were most evident at a later stage of word processing
• However, sentence context can influence spoken word processing before a word's recognition/uniqueness point is reached
Strengths:
• Processing of competitor words is correctly described
• Processing of words is sequential
• The uniqueness point is important
• Context effects occur at the integration stage
Weaknesses:
• Context can affect word processing earlier
• De-emphasises the role of word meaning
• Ignores the fact that prediction of upcoming speech is important
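The narrowing of the word-initial cohort is easy to sketch. A minimal illustration with an invented lexicon; it models only the original all-or-nothing cohort membership, not the graded activations or late context effects of the revised model.

```python
# Sketch of cohort narrowing: every lexical entry consistent with the input so far
# stays in the cohort; recognition occurs at the uniqueness point.

LEXICON = ["captain", "capital", "caption", "capture", "cap", "cat", "boat"]

def cohort_trace(spoken_word):
    """Print the shrinking word-initial cohort as each segment is heard."""
    for n in range(1, len(spoken_word) + 1):
        heard = spoken_word[:n]
        cohort = [w for w in LEXICON if w.startswith(heard)]
        print(f"heard '{heard}': cohort = {cohort}")
        if len(cohort) == 1:
            print(f"uniqueness point -> recognise '{cohort[0]}'")
            break

cohort_trace("captain")
# 'cap...' keeps captain/capital/caption/capture/cap active; the candidate set only
# collapses to 'captain' once enough of the word has been heard.
```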

Anomia

•An impaired ability to name objects -Errors based on meaning -Errors based on phonology -According to WEAVER++ occurs at lemma selection •Lambon Ralph et al. (2002) -Data on anomic patients do not support a two-stage model -Argued that extent of anomia is better predicted by general semantic and phonological impairments

Bartlett's Schema theory

•Bartlett (1932) -Schemas help story comprehension -Rationalisation •People make comprehension errors to make the story fit expectations -Memory for precise details is forgotten -Memory for schematic information is not -Rationalisation errors should increase in number at longer retention intervals

Constructionist approach to inferences

•Bransford et al (1972) •Readers typically construct a relatively complete "mental model" of the situation and events referred to in the text •Comprehension typically requires our active involvement to supply information not explicitly contained in the text

Constraint based theory

•Competing analyses of the sentence are activated at the same time and ranked according to the strength of activation •The syntactic structure receiving most support from the various constraints is highly activated •Other syntactic structures are less activated • The processing system uses four language characteristics to resolve ambiguities -Grammatical knowledge constrains possible sentence interpretations -Various forms of information associated with a given word interact with each other -A word may be less ambiguous in some ways than others (e.g. tense but not grammatical category) -The permissible interpretations differ according to past experience • Semantic constraints favouring the wrong structure are greater in the first sentence. Make it harder for the reader to change their incorrect syntactic analysis when it needs to be abandoned (i.e. when "amused" is reached) • Eye fixations at verb and post-verb positions were longer for sentence one • Verbs are an important constraint that can influence parsing • Many verbs can occur within various syntactic structures, but are more commonly found in some structures than others e.g. "to read" [usually followed by direct object] "the professor read the newspaper during his break" "the professor read the newspaper had been destroyed" [a sentence complement] • This is called verb bias • Wilson & Garnsey (2009) found readers resolved ambiguities and identified correct syntactic structure more readily when the structure was consistent with verb bias than not • This is contrary to the garden path model Strengths • Using all relevant information to determine structure would be efficient • Evidence suggests semantic information is used very early • Assumes some flexibility in parsing decisions Limitations • Fails to make precise predictions about parsing • Many effects can be explained by garden-path model

Spreading activation theory

Dell (1986): Spreading Activation Theory
• Processes involved in speech production occur at the same time (parallel processing); different kinds of information can be processed together, so the processes involved in speech production are flexible and even chaotic
• Dell (1986) and Dell and O'Seaghdha (1991): a representation is formed at each level, and processing during speech planning occurs at the same time at all four levels
• Interactive (processes at any level can influence any other level), but typically more advanced at higher levels
• Nodes within a network (many of which correspond to words) vary in activation or energy; when a node is activated, activation or energy spreads from it to other related nodes, e.g. "tree" to "plant"; spreading activation can occur for sounds as well as words
• Categorical rules at each level constrain the categories of items and the combinations of categories that are acceptable, e.g. categorical rules at the syntactic level specify the syntactic categories of items within a sentence (nouns, verbs, adjectives etc.)
• Insertion rules select the items for inclusion at each level (according to certain criteria): the most highly activated node belonging to the category is chosen
• After an item has been selected, its activation level immediately returns to zero, preventing it from being selected repeatedly
• Accounts for most speech errors, though it predicts rather more errors than are actually found
Five errors predicted by the theory:
1. Mixed-error effect: the incorrect word is semantically and phonemically related to the target (e.g. "let's stop" → "let's start"); lexical neighbourhood
2. Lexical bias effect: errors tend to be real words rather than nonwords
3. Anticipatory errors: a sound is spoken earlier in the sentence than expected (e.g. "the Tanadian from Toronto" cf. "Canadian")
4. Anticipation errors → exchange errors: two words within a sentence are swapped (e.g. "I must send a wife to my email"); remember that activation returns to zero once a node is selected, so "wife" is unlikely to be selected in its correct place once used
5. Errors involve words within a short distance: words relevant to the part of the sentence under current consideration are generally more activated than those relevant to more distant parts of the sentence
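The insertion-rule idea (the most activated node in the right category wins, then resets) can be sketched in a few lines. This is a loose illustration, not Dell's simulation: the lexical network, weights and noise level are invented, and the separate levels and categorical rules are collapsed into a single word layer.

```python
import random

# Toy spreading-activation lexicon in the spirit of Dell (1986), not his model.
# Activation spreads from the intended word to related nodes; the insertion rule
# picks the most activated word, and noise occasionally yields a mixed error such
# as "stop" -> "start" (a word that is both semantically and phonologically related).

links = {
    "stop": {"start": 0.6, "halt": 0.5, "shop": 0.4},  # 'start': semantic + phonological neighbour
    "start": {}, "halt": {}, "shop": {},
}

def produce(target, noise=0.35, seed=None):
    rng = random.Random(seed)
    act = {w: 0.0 for w in links}
    act[target] = 1.0
    for neighbour, weight in links[target].items():    # spreading activation to related nodes
        act[neighbour] += weight
    noisy = {w: a + rng.gauss(0, noise) for w, a in act.items()}
    return max(noisy, key=noisy.get)                    # insertion rule: most activated node wins
    # (after selection its activation would be reset to zero, blocking repeats)

productions = [produce("stop", seed=s) for s in range(2000)]
slips = [w for w in productions if w != "stop"]
print(len(slips), "errors in 2000 attempts; most are the related word 'start'")
```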

Good enough representations

•Ferreira, Bailey, & Ferraro (2002) -Readers can misinterpret sentences -We use heuristics to simplify task of understanding sentences •NVN (noun verb noun) strategy -assumes that subject of a sentence is the agent of some action

Gesture

• Frequently used by speakers
• Co-ordinated in timing and meaning with the words being spoken
• Do they serve a communicative function (e.g. visual cues)?
• Why do people gesture when they are talking on the telephone and the listener cannot see them (Gerwing & Allison, 2011)? Why do people blind from birth use gestures, even when speaking to blind listeners?
• Holler & Wilkin (2011) compared gestures before and after feedback: the number of gestures reduced when the listener indicated understanding of what had been said, and feedback encouraging clarification, elaboration or correction was followed by more precise, larger or more prominent gestures

Writing to communicate

•Important topic, but not separate from other cognitive activities •Involves retrieving and organising information stored in long-term memory •Many theorists suggest writing is a form of thinking (due to the complex thought processes involved) •Involves several processes, but some disagreement about nature/number of processes • The three key writing processes cannot easily be separated -Planning (producing ideas and organising them for goals) -Sentence generation (actual production of sentences) -Revision (evaluation of individual words & overall structure) •Key processes -Planning (approx 30% of the time) -Sentence generation (approx 50% of the time) -Revision (approx 20% of the time) •Writers don't always know how much time is allocated to each process •There is a natural sequence through the processes, but writers often deviate from this if they see a problem in a draft (before completing the writing)

Writing Key processes

•Key processes (according to Chenoweth & Hayes, 2003) -Proposer •Ideas for expression and engaged in high-level processes of planning -Translator •Converts message to word strings (e.g. sentences) -Transcriber •Converts word strings into text -Evaluator/reviser •Monitors and evaluates production

Levelt's WEAVER++

• Levelt et al. (1999): WEAVER++ (Word-form Encoding by Activation and VERification)
• A computational model focusing on the production of words, not sentences; mainly designed to show how word production proceeds from meaning to sound
• Feedforward activation-spreading; a discrete model
• Three main levels:
- Nodes representing lexical concepts
- Nodes representing lemmas (abstract representations of words): if you know the meaning of a word and that it is a noun, but you don't know its pronunciation, you have accessed the lemma (cf. tip-of-the-tongue states)
- Nodes representing word forms (morphemes)
• The model is complex: there is a stage of lexical selection at which a lemma (representing word meaning and syntax) is selected because it is more activated than other lemmas; then morphological encoding, during which the basic word form of the lemma is selected; then phonological encoding, during which the word's syllables are computed (lexicalisation)
• Indefrey (2011): meta-analysis of studies assessing brain activity (picture naming only). Approximate timings:
- Conceptual preparation: 200 ms
- Lemma retrieval: about 75 ms
- Phonological code retrieval: 20 ms per phoneme and 50-55 ms per syllable
- Phonetic encoding with articulation: 600 ms after the initiation of processing
Strengths:
• Some studies (Biedermann et al., 2008; Turennout et al., 1998) have found that speakers do have access to syntactic and semantic information about words before access to phonological information
• The theory has shifted the balance away from speech errors and towards the timing of word-production processes
• It is a simple (?) and elegant model that makes testable predictions
Weaknesses:
• Narrow focus (single-word production)
• In reality there is more interaction between processing stages than the model assumes (i.e. not so discrete)
• Evidence suggests more parallel processing than the model allows for (e.g. word- and sound-exchange errors)

Speech errors

•Spoonerism -"you have hissed all my mystery lectures" -Reverend William Archibald Spooner (1844-1930) •Freudian slip -"goxi furl", "bine foddy" [female experimenter who was attractive, personable, very provocatively attired, seductive in behaviour] •Semantic substitution -"Where is my tennis bat?" -Substitution usually the same grammatical class •Morpheme-exchange error -"he has already trunked two packs" -Inflections or suffixes remain in place but attached to wrong word -Positioning of inflections separate process? •Word-exchange error -"I must let the house out of the cat" -Typically further apart in the sentence than sound-exchange errors -Provide evidence for some forward planning •Number-agreement error -"the government have made a mess of things" -"the government has made a mess of things" -Singular verbs are mistakenly used with plural subjects (or vice versa)

Construction Integration model

•Story comprehension involves forming propositions •a statement making an assertion or denial which can be true or false •The greater number of propositions the longer it takes to read •Sentences processed proposition by proposition regardless of the number of words in each proposition

Grammar

•The system and structure of a language (including syntax and morphology) •The way(s) in which words are combined • A set of rules is commonly referred to as a grammar • An infinite number of sentences is possible in any language -We need a set of rules to govern these possibilities • Sentences are nevertheless systematic and organised •Rules to take account of the productivity and the regularity of language Chomsky (1957, 1959) •A grammar should be able to generate all permissible sentences in a given language "Matthew is likely to leave" •At the same time, reject all unacceptable sentences "Matthew is probable to leave"

Comprehending sentences

•Two main levels of analysis in comprehension of sentences: -Pragmatics • analysis of how language is used to communicate -Parsing •analysis of the syntactic (grammatical) structure of the sentence

Writing expertise

•Why are some writers more skilful than others? -Practice, practice, practice •Individual differences probably depend on planning and revision processes •Two strategies used in planning stage -knowledge-telling strategy •Writing down everything known about a topic with minimal planning -knowledge-transforming strategy •Occurs with increased writing ability. Achieving the goals of the writing task •Really expert writers attain the knowledge-crafting stage -this is when we focus on the reader •Knowledge effect - the tendency to assume that other people share the knowledge we possess •Hayes & Bajzek (2008) found that individuals familiar with technical terms greatly overestimate the knowledge other people have of those terms

Event-indexing Model

•Zwaan and Radvansky (1998) -Readers monitor five aspects •The protagonist: -The central character or actor in the present event compared to the previous event •Temporality: -The relationship between the times at which the present and previous events occurred •Causality: -The causal relationship of the current event to the previous event •Spatiality: -The relationship between the spatial setting of the current event and that of the previous event •Intentionality: -The relationship between the character's goals and the present event •Readers continually update the situation model to accurately reflect the information presented (with respect to all five indexes) •Unexpected changes in any of the five requires more processing effort than when all five remain the same •Assumed that the five are monitored independently of each other • Event-segmentation theory (Zwaan & Madden, 2004) -Incremental updating of individual situational dimensions -Global updating of situational models

Wernicke's Aphasia

•fluent/sensory aphasia •fluent (or hyperfluent) speech •normal melodic line •sentences can be long, complex grammar •severe WFD, circumlocution++ •semantic paraphasias, neologisms •poor comprehension •not necessarily damage to Wernicke's Area

Broca's Aphasia

•nonfluent/motor aphasia •slow, halting speech (dysfluent) •may have presence of agrammatism -Reduction of phrase length -Reduction of syntactic complexity: 'telegraphic' -Omission of function words and grammatical morphemes •disruption of prosody/melodic line •relatively preserved comprehension •not necessarily damage to Broca's Area •Finding that people with Broca's aphasia have greater difficulty speaking grammatically is language specific

