FINAL EXAM


mondegreens

"Slips of the ear" that result in errors of word segmentation.

Three steps are involved in planning for speaking

(1) Figure out the overall meaning we want to express. (2) Figure out what words to use and what sentence structure to put them in. (3) Implement this plan into actual speech. There might be some time gap between these processes.

Merrill Garrett (1980): Speech Preparation Model

(1) Formulate a conceptual representation of the content of the sentence. (2) When speakers want to speak, they first think of the words they want to use (lemmas), which include their meanings and grammar rules. Then, they choose how to put those words together into a sentence. (3) When people form words, they first decide which parts of the word they want to use (like the stem or plural form), and then they put those parts into a word frame that needs those specific parts. After that, they figure out how to say the word using specific sounds. (4) Finally, the detailed sound plan is implemented and actually pronounced.

Model building for semantic priming

A model must specify: (1) the existence of links that connect representations to one another, and (2) what happens along these links (that is, do they spread activation or suppress it, and how do these effects dissipate over time?).
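
A minimal sketch of ingredient (1) plus the spreading case of ingredient (2), assuming a toy network: the node names, the 0.5 spreading rate, and the cutoff are all illustrative values, not taken from any actual model.

# Toy spreading-activation network: links between word/concept nodes,
# plus a rule for how activation travels along those links.
# All node names and numeric values are illustrative.

links = {
    "apple": ["orange", "fruit", "pie"],
    "fruit": ["orange", "healthy"],
    "orange": ["fruit"],
}

def spread(source, activations=None, strength=1.0, rate=0.5, floor=0.05):
    """Pass a fraction of `source`'s activation to each linked node, recursively."""
    if activations is None:
        activations = {}
    activations[source] = max(activations.get(source, 0.0), strength)
    if strength * rate < floor:          # stop once the signal is negligible
        return activations
    for neighbor in links.get(source, []):
        spread(neighbor, activations, strength * rate, rate, floor)
    return activations

print(spread("apple"))
# {'apple': 1.0, 'orange': 0.5, 'fruit': 0.5, 'healthy': 0.25, 'pie': 0.5}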

Two ways in which context can help word recognition.

(A) Activation flows from the bottom up, from phonemic units to words and in turn to semantic features. Given the context "Sarah got her check," both meanings of the word bank (financial institution vs. land next to the water) are equally active, because the phonemes match both words, until contextual information is recruited to select the most appropriate meaning. Multiple meanings do in fact routinely flare up in the mind, even if we ultimately have a way of picking out the most "useful" one. (B) Context can generate more expectation for some meanings than others by "preactivating" some semantic features: given "Sarah got her check," the financial-institution features of bank are preactivated before the word's sounds arrive, so the phonemes (which match both words) confirm the meaning that already has a head start.

alphabetic inventory

A collection of orthographic symbols that map onto individual sounds or phonemes. (English)

Length and frequency effect

A short, high-frequency word (cat) from a dense neighborhood (bat, nap, tack, lack, mat, rat) is recognized faster than a longer, low-frequency word (bluster) from a sparse neighborhood (muster, fluster).

James McClelland and Jeff Elman (1986): TRACE model

A system that preserves the large-scale competition effects that are part of word recognition but is more resilient against processing disruptions. In their model: Streams of sound input are continuously fed into the word recognition system and activate the words that contain them, without any need for identification of the left edges of words. Dan or last dance could be predicted from a sound fragment like "astdan"; the cohort model could not predict anything from "astdan" because it missed the first part of the word. Words whose sounds overlap with the target word in the middle or at the end (beaker & speaker) should become activated as well. Using the semantic priming paradigm, TRACE would predict that the word beaker should speed up recognition times for a word like music (via speaker) as well as insect (via beetle), compared with some unrelated word.
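
The contrast with the cohort model can be sketched roughly as follows. The real TRACE model is an interactive-activation network; this is only a toy matcher, with an illustrative word list and scoring scheme, showing why scoring overlap at any position (rather than only from a word's left edge) lets a fragment like "astdan" still point toward dance or Dan.

# Toy contrast between left-edge matching (cohort-style) and
# anywhere-in-the-stream matching (TRACE-style). Word list and
# scoring are illustrative only.

lexicon = ["last", "dance", "dan", "ask", "beaker", "speaker"]

def cohort_candidates(fragment):
    """Left-edge matching: a word survives only if the fragment starts it."""
    return [w for w in lexicon if w.startswith(fragment)]

def trace_candidates(fragment):
    """Anywhere matching: score each word by its longest chunk found in the fragment."""
    scores = {}
    for w in lexicon:
        best = 0
        for i in range(len(w)):
            for j in range(i + 1, len(w) + 1):
                if w[i:j] in fragment:
                    best = max(best, j - i)
        if best >= 2:                      # keep words with a sizable overlap
            scores[w] = best
    return sorted(scores, key=scores.get, reverse=True)

print(cohort_candidates("astdan"))   # [] -- nothing starts with "astdan"
print(trace_candidates("astdan"))    # ['last', 'dance', 'dan', 'ask']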

lemma

Abstract mental representation of a word containing information about its meaning and syntactic category, but not about its sounds.

age-of-acquisition effect

The age at which you acquire a word affects how quickly you recognize it, with earlier-acquired words being recognized faster than later-acquired words, even if both are equally frequent.

Errors involving sound units

Anticipations: A sound is mistakenly produced too early: reek-long race (Intended: week-long race); alsho share (Intended: also share). Perseverations: An already pronounced sound is mistakenly produced again: John gave the goy (Intended: gave the boy); black bloxes (Intended: black boxes). Exchanges of two sound units: the nipper is zarrow (Intended: zipper is narrow); fash and tickle (Intended: fish and tackle).

Steve Piantadosi and his colleagues (2012): Ambiguity in language

Argued that not only do languages not "care" about avoiding ambiguity, they actively seek it out, because ambiguity actually makes a language more effective. Ambiguity in language doesn't really cause big problems for understanding, because people use context to figure out what's meant. Ambiguity is helpful because people can reuse shorter and simpler words instead of more complicated ones. Because ambiguity doesn't usually cause serious misunderstandings, speakers can take advantage of it to reduce the effort it takes to produce language. Across various languages, the most commonly ambiguous words tended to be short and simple, avoiding unusual sound combinations.

Lila Gleitman and her colleagues (2007)

Briefly flickered a black square in the location of one of the characters just before the visual scene appeared on the screen. Though subjects were unaware of the square's appearance, this trick succeeded in pulling their gaze to the targeted character about 75% of the time. Instead of surveying the whole scene for about half a second before landing on one character and then naming it, participants in the Gleitman study tended to lock onto the targeted character immediately and continue gazing at it until they began to name it. This affected how they conceptualized the scene and described it: attention on the dog in the scene → Description: "The dog is chasing the man."; attention on the man → Description: "The man is running away from the dog." Explanation: If people are strongly focused on one participant in the event they're describing, they retrieve that word early in the planning process, which commits them to a certain word order, which can in turn affect the overall message plan. Message planning is a coordinated process with linguistic planning; if linguistic planning gets ahead of message planning, adjustments may need to be made to the message.

Distributed representation

Bundles of sound units connect directly to bundles of semantic features, without any intervening word nodes. [Sound units: /dog/] --> [Semantic features: Furry, Four-legged, Domesticated, etc.] [Sound units: /cat/] --> [Semantic features: Feline, Carnivore, Independent, etc.] Because semantically related words share feature units (e.g., feline), those shared features are activated for both words, but they do not get in the way of activating the right word.
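
A minimal data-structure sketch of the idea, with made-up feature sets: bundles of sound units map straight onto bundles of semantic features, and relatedness falls out of feature overlap rather than from links between word nodes.

# Distributed representation sketch: sound bundles map directly onto
# bundles of semantic features; there are no word nodes in between.
# Feature sets are illustrative, not from any real model.

sound_to_features = {
    ("d", "o", "g"):  {"furry", "four-legged", "domesticated", "barks"},
    ("k", "ae", "t"): {"furry", "four-legged", "domesticated", "feline"},
    ("l", "ai", "o", "n"): {"furry", "four-legged", "feline", "carnivore", "mane"},
}

def feature_overlap(sounds_a, sounds_b):
    """Semantic relatedness emerges from shared features, not word-to-word links."""
    a, b = sound_to_features[sounds_a], sound_to_features[sounds_b]
    return len(a & b) / len(a | b)        # Jaccard overlap of the feature bundles

print(feature_overlap(("k", "ae", "t"), ("l", "ai", "o", "n")))  # cat vs. lion: 0.5
print(feature_overlap(("d", "o", "g"), ("l", "ai", "o", "n")))   # dog vs. lion: ~0.29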

Boulenger and colleagues (2008): Parkinson's and word access with movement

Compared with other subjects, people with Parkinson's showed reduced priming for action words but normal priming for words that don't evoke actions. When these subjects were treated with medication that improves brain function in motor areas, they showed normal priming for action words.

Michael Spivey and Viorica Marian (1999)

Designed a simple eye-tracking study with cross-language cohort competitors. Target spoken word: the English word marker / the Russian word marka ("stamp"). Cross-language cohort competitors in the display: a stamp (marka in Russian) when the English word marker was heard, or a marker when the word was heard in Russian; displays also contained other objects such as a ruler (linejka in Russian). Compared eye movement patterns for these "cohort" displays with control versions of the same displays in which none of the objects had names in either language that overlapped with the target word. Results: Heard in Russian: strong evidence of competition from the English cohort competitor. Heard in English: there wasn't a clear effect of cross-language competition in the other direction; evidently, in some cases cross-language competition can be muted or absent. Further experimentation suggested that the degree of cross-language activation could be dialed up or down, depending on a number of factors: (1) which language is the first or dominant language; (2) whether the experimental setting is a bilingual one or a purely monolingual interaction. These studies demonstrate that languages are not isolated from each other during daily use and can influence each other. In some cases, cross-talk between languages can be useful, as with cognate words: night (English), Nacht (German), nag (Afrikaans), natt (Swedish, Norwegian), nat (Danish), nátt (Faroese), notte (Italian), and noche (Spanish). In other cases, overlapping sounds in words across languages may be coincidental, making word recognition harder and requiring more effort to suppress irrelevant words.

eye movement studies (TRACE model)

There is evidence of activation of rhyme competitors; however, the mismatch at the beginning of the word outweighs the overlap and suppresses the rhyme competitor. As the word comes to sound more like the rhyme competitor, there is a surge of activation near the end, but it won't overtake the target because of the mismatch at the beginning.

Sign language: Learning in stages of life

Examined brain activity during word recognition by deaf subjects who were first exposed to sign either as kids or in early adulthood. Results: The late-exposed adults showed bilateral patterns of activity that are typically found in young children during lexical processing, rather than the mostly left-hemisphere activity found in more mature brains (this occurred even in an adult who had spent 30 years in the signing community). But people who learned a sign language late in life, yet were born hearing and exposed to spoken language from birth, showed normal patterns of brain activity even for signs.

Zenzi Griffin and Kay Bock (2000)

Examined how their participants' eyes roamed over images as they prepared to describe them. Task #1: Participants examined a series of simple scenes; upon seeing each scene they were instructed to prepare to describe it. They pressed a button when they were ready to speak and the scene disappeared, so that they couldn't look at the scene while speaking. Task #2: Told to examine the same scenes and describe them while continuing to view the images. The comparison between Task #1 and Task #2 allowed the researchers to separate the pattern of eye movements that accompany speaking from those that accompany thinking before speaking. Task #3: Asked to identify the "victim" in the scene and press a button when they were ready to answer this question. Each image showed someone doing something to someone else (shooting, kicking, spraying water). This task forced participants to analyze the scene and make a decision about who was doing what to whom, but without having to speak. Result: People's eye movements provided strong evidence that they had analyzed the full scene and identified the relationship between the characters before beginning to speak. Task #1: They looked at both of the characters in the scene in turn, and within about half a second, their eye movements reliably settled on the character that eventually appeared as the subject of the sentence. Task #2: People didn't immediately describe the first character they saw; they first looked at both characters and formed a mental template of what was happening in the scene; before speaking, they looked back and forth between both characters for about 400 ms; then they focused on the character that would be the subject of the sentence, and named it; finally, they shifted their attention to the other character, which was named as the object of the sentence. Task #3: Their eyes settled on the victim before they answered (this took about the same amount of time as the initial eye movements in Task #1). People looked steadily at each of the characters for roughly a second before naming it, about the same amount of time that participants usually take to name an object in a simple picture-naming task. The researchers inferred that participants (1) formulated a conceptual plan that involved both characters and the relation between them, (2) figured out which character was going to be the subject of the sentence, and (3) began to retrieve the name for that character, with this last process taking about a second. A well-organized message plan appeared to be in place before word retrieval began.

Errors involving morpheme units

Exchanges: Morphemes are switched, rather than entire complex words: nerve of a vergeous breakdown (Intended: verge of a nervous breakdown); already trunked two packs (Intended: packed two trunks).

Excitatory connections

Excitatory Connections: Activation is passed from one unit to another; the more active a unit becomes, the more it increases the activation of connected units.

other examples of cohort competitors and semantics with eye movement

Eye movements reveal that both cohort competitors and semantically related words are activated shortly after the beginning of a spoken word. Spoken word: Hammock. Eye track: Nail, Hammock. Why?: The cohort competitor of hammock is hammer, which has a semantic relationship with nail. The effects of frequency and neighborhood density are also noticeable.

facilitation vs inhibition

Facilitation: Processes that make it easier for word recognition to be completed Inhibition: Processes that result in word recognition becoming more difficult.

Tanya Stivers (2009)

Findings: In conversations, people usually take a similar amount of time to respond after someone finishes speaking. Looked at ten languages across five continents, studying speakers from diverse groups; for the majority of these languages, the average time between the end of a question and the start of the answer was about 200 milliseconds. This shows that people start planning their response before the question ends. Even those who responded most slowly, like the Danes, only lagged behind by about one syllable. We may think that cultures differ greatly in the length of their conversational pauses; maybe one reason for this is that some languages seem to tolerate longer pauses more easily than others. While the average response time for Danes was around 469 milliseconds, some pauses in Danish conversations lasted for over 1.5 seconds, whereas such lengthy pauses were very rare in Korean or Dutch conversations.

Marslen-Wilson and his colleagues: Crossmodal priming tasks

Found parallel activation of multiple cohort competitors based on partial words (e.g., cap-): people were fast in a lexical decision task to respond to words like ship or jail, which are semantically related to cohort candidates such as captain and captive. Even in the middle of a word, there was evidence for the activation of words that were semantically related to cohort candidates. This suggested that language processing is incremental, meaning that listeners generate hypotheses about the meaning of what is being said and revise them as new information is received.

Stefan Gries (2005)

Gries was able to study 3,003 examples in his corpus. Found evidence that certain structures were more likely to occur if another sentence of the same structure had occurred prior. The syntactic priming effect found in Bock's study, with strength depending on the specific structures involved, Gries also observed in his corpus. Noticed that the size of the priming effect varied depending on the specific verb used in the sentence. Ex. Verb: give. Primed: double-object structure ("He gave me a gift."). Chosen: prepositional structure ("He gave a gift to me."). Some verbs had a strong preference for the double-object or the prepositional structure, and resisted being pulled away from their preferred structure even if an earlier sentence had used the opposite structure. The priming effect was strongest for those verbs that showed no particular bias for one structure over the other. This shows that verb-specific knowledge plays an important role in guiding the choices that speakers make during sentence production, interacting with other constraints. Without the benefit of corpus data, researchers would just be guessing about the extent of a verb's bias toward one structure or the other, and it would take quite a few experiments to test the range of verbs that can easily be pulled out of an existing corpus.

Semantic priming

Hearing or reading a word partially activates other words that are related in meaning to that word, making the related words easier to recognize in subsequent encounters. Hear: (Apple) Activates: (Orange, Fruit, Pie, Food, Healthy)

1996 paper by Antje Meyer

Idea: words are connected to other words in complex networks, reflecting similarities of sound or meaning. On the comprehension side, we saw that these connections could often speed up the word recognition process as a result of spreading activation. But lexical connections could also slow down the recognition of a word, by virtue of inhibitory links from competitors that also became activated.

The difficulty of semantic priming

If the process of retrieving a word partially lights up other words, these words could get in the way of picking out the right word.

William Marslen-Wilson (1987): Cohort model

In his cohort model of word recognition, he suggested that lexical activation starts right at the beginning of the word, with multiple competitors becoming active. As more sound input comes in, the candidate list gets smaller until the uniqueness point, at which only one possible match remains. Input: be- → beaker? beetle? because? before? Input: bea- → beaker? beetle? Uniqueness point: beak → beaker.
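
A minimal sketch of the pruning process, with an illustrative word list: candidates are activated from the very first sound and winnowed as each new segment arrives, until only one match remains. (With this tiny spelling-based lexicon the uniqueness point arrives earlier than it would in a full vocabulary.)

# Cohort-model sketch: start with everything matching the first sound,
# then prune the candidate set as each new segment arrives.
# The word list is illustrative only.

lexicon = ["beaker", "beetle", "because", "before", "bee", "speaker"]

def cohort_over_time(word):
    """Yield (input-so-far, surviving candidates) for each successive segment."""
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        cohort = [w for w in lexicon if w.startswith(prefix)]
        yield prefix, cohort
        if len(cohort) == 1:              # uniqueness point: only one match left
            break

for prefix, cohort in cohort_over_time("beaker"):
    print(f"{prefix:>6} -> {cohort}")
#      b -> ['beaker', 'beetle', 'because', 'before', 'bee']
#     be -> ['beaker', 'beetle', 'because', 'before', 'bee']
#    bea -> ['beaker']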

Why is the English alphabet like that lol

In the early stages of spelling, there was a lack of consistency because writing skills were still evolving and there was no central authority to enforce standardized spelling. Different dialects of a language pronounced words differently, which made things complicated. This led to unstable spellings similar to those of a first grader who has just learned to write. Spoken language changes over time, but written language is more conservative. Spelling reform can happen, even on a large scale, but there has to be some single accepted authority that drives the change. The global nature of English raises another set of issues when it comes to spelling reform: whose variety of English should become the basis for the written language? Even within North America alone, there is a great deal of variation, with differences among regional dialects apparently growing over time rather than shrinking. If every English variety were allowed to change spellings democratically to maximize transparency, it could make written communication difficult between people from different dialect groups.

Susan Duffy, Robin Morris, and Keith Rayner (1988):

In the sentence "Although it was by far the largest building in town, the mint/jail was seldom mentioned," the word "mint" is used in an ambiguous way, meaning it can refer to a place where money is produced or a herb with a strong smell and taste. When the sentence was presented with a context that favored the less common meaning of "mint" (i.e., the herb), participants read it more slowly than a control word like "jail." However, when the words were equally biased in frequency between the two meanings, as in the case of words like "pitcher" or "straw," and when the context favored one of their meanings, people spent no more time reading these ambiguous words than unambiguous control words. This suggests that both the frequency of meanings and contextual expectations can affect the activation levels of word representations.

Bock (1986): Semantic priming for sentence structuring

Increased the accessibility of words like church or lightning by presenting semantically related primes (either worship or thunder). Received prime: worship → more likely to start the sentence with church, resulting in more passive sentences: "The church was hit by lightning." Received prime: thunder → more likely to start the sentence with lightning, resulting in an active sentence: "The lightning hit the church." This study shows that making a word linguistically or conceptually accessible has consequences for syntactic formulation.

Amy Beth Warriner and Karin Humphreys (2008)

Induced tip-of-the-tongue (TOT) states in people for varying lengths of time. Used a computer to display definitions for obscure words and had their subjects press a button if they found themselves in a TOT state. A computer program randomly released the subjects from their word-finding agony after either 10 or 30 seconds by displaying the correct word. Subjects were also given the option to press a button if the word leaped into their minds without help. Two days later, the subjects came back to repeat the whole ordeal. Result: Subjects who had spent more time in the TOT state (30 seconds) were more likely to fail to retrieve the same word the next time; those given less time (10 seconds) were more likely to recall it. If they were able to retrieve the word on their own, they were also less likely to forget it later; the effect of the retrieval failure was even stronger if they couldn't come up with the word and needed to be told. The researchers think that spending too much time in a TOT state reinforces the bad pattern of forgetting, making it more likely to happen again.

inhibitory connections

Inhibitory Connections: Connections that lower the activation of connected units, so that the more active a unit becomes, the more it suppresses activation of a unit it is linked to.

James Magnuson and colleagues (2007)

It would feel deeply weird to read a word that appeared one or two letters at a time from left to right, and this intuition is confirmed by studies that look at where people focus their gaze while reading. Rather than scanning the word left to right, their gaze lands somewhere within the word, and they can usually read the entire word from that position

Jill Morford (2011): Sign Language Findings

Late signers needed more information than native signers to correctly identify signs in a gating task; late signers relied more heavily on information about handshape, whereas native signers privileged information about location.

How to prevent conscious association

Masked priming: the prime word is presented subliminally (too quickly to be consciously recognized). Shorten the time between prime and target (but make sure it's long enough that participants aren't making shallow guesses of the form "that looks like a word, so I'll just say it is"). Reduce the proportion of associated words: include more non-words and unrelated word pairs.

Mixed error and supercompetitors

Mixed errors are more likely when similarities of both sound and meaning are especially high. The switch cat <--> rat is more common than cat <--> dog or cat <--> vat: even though dog is semantically similar and vat is phonologically similar, rat is more common because it's similar in both ways. A model with no feedback between levels agrees that these types of swapping can happen, at lemma selection (cat → dog, cat → rat) and as sound-based substitution (cat → vat), but it suggests they all carry the same risk of happening, because the errors come only from these two separate levels. Mixed errors occur more frequently than such a model predicts: Let's start. (Intended: stop) I miss my rat. (Intended: cat) Look it up in the dictionary. (Intended: directory) That's a psychological (Intended: phonological) issue. Many of us were in Rome at the president's (Intended: pope's) funeral. A disproportionate number of mixed errors is in fact predicted by a model in which activation from sounds can flow back up to words and influence lexical selection. Here's why: In a semantically based substitution (cat → dog), the competition happens during lemma selection; in a sound-based substitution like cat → vat, the sound /v/ competes with /k/ as the first consonant of the word. In both cases, the effects of competition are restricted to the level where the competition originated, and there's no cross-talk between levels. With cat <--> rat, rat gets an extra boost because it is both semantically and phonologically related: dog and vat are each restricted to one level, while rat is present at both, giving /r/ a bigger boost than /v/. What happens with cat → rat is more complicated. At the lemma level, rat is activated because of its semantic similarity to the target (other activated words: dog, mouse, etc.). All of these competing words send some activation down to the sound level along with /k/, /æ/, /t/, so there is also activation of the sounds /d/, /g/, /r/, /m/ (from dog, rat, mouse). Since activation is allowed to bounce back up to the word level, the sounds /æ/ and /t/ activate both cat and rat, so rat gets an extra boost of activation, which cycles back down to its component sounds. The lemma rat becomes more activated than it could be based solely on its semantic relationship to cat: because of the flow back and forth between levels, activation from two separate levels becomes amplified, and rat becomes superactivated as a result of the activation buzzing back and forth between levels when someone is trying to say cat. The prevalence of mixed errors is neatly consistent with a model that involves activation in both directions, but it's consistent with other accounts as well.

neighborhood density effects

It is more difficult and time-consuming to retrieve a word from memory if the word is similar to many other words in the vocabulary (sling, fling, sing, ding, swing) than if it is similar to only a few other words (staunch, stench). The effect can be seen even when no similar words have occurred before the test word, suggesting that lexical neighbors can compete for recognition even when they are not present in the immediate context; it's enough that they're simply part of the vocabulary.
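
One simple way to make "dense" versus "sparse" neighborhoods concrete is to count how many other vocabulary items differ from a word by a single substituted, added, or deleted letter. The sketch below does this over an illustrative mini-lexicon; real studies compute neighborhoods over phonemes in a full vocabulary.

# Neighborhood density sketch: count lexicon words within edit distance 1
# (one substitution, insertion, or deletion). Mini-lexicon is illustrative.

lexicon = {"cat", "bat", "mat", "rat", "cot", "cut", "cast",
           "staunch", "stench", "stanch"}

def edit_distance_one(a, b):
    """True if a and b differ by exactly one substitution, insertion, or deletion."""
    if a == b:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) == 1:
        shorter, longer = sorted((a, b), key=len)
        for i in range(len(longer)):
            if longer[:i] + longer[i + 1:] == shorter:
                return True
    return False

def neighborhood(word):
    return [w for w in lexicon if edit_distance_one(word, w)]

print(neighborhood("cat"))      # dense: bat, mat, rat, cot, cut, cast (order may vary)
print(neighborhood("staunch"))  # sparse: ['stanch']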

Some studies idk

One study found that subliminally priming subjects with the brand names of drinks affected their choices only if they were thirsty at the time In some cases, activated information may produce a reverse priming effect, in which priming sets off a behavioral backlash in the opposite direction from what you'd expect based on the activated information Studies like these suggest that our minds use activated information in a strategic way, even if our conscious selves are not aware of how we deploy these strategies.

Lexical decision task: Facilitation

Participants read a string of letters on a screen. Strings were either actual words (doctor) or non-words (dometer). Press one of two buttons: (real word) (non-word). Speed of pressing the button is recorded. Results: Response is faster if the word seen before was related to the word following (nurse ---> doctor) than if the prior word was unrelated (butter ---> doctor).

Lexical decision task: Inhibition

Participants read a string of letters on a screen Words were either actual words (stiff) or non-words (stirm) Press one of two buttons (real word) (non-word) Speed of pressing the button is recorded Results: Response is slower if the word seen before was closely related to the word following (stiff ---> still)

incremental language processing

People don't wait to hear the whole word before they guess; they start guessing throughout the whole word.

syntactic priming

Phenomenon in which speakers are more likely to use a particular structure to express an idea if they had recently used the same structure to express a different idea

Tracking eye movements we see... (Facilitation)

Presented with: spoken word "hammer". Visual: hammer, nail, cricket. Result: Eye tracking shows that our eyes are likely to look briefly at the nail, then at the hammer, ignoring the cricket.

Tracking eye movements we see... (Inhibition)

Presented with: word "beaker" Visual: Includes beaker and beetle Result: Recognition of spoken word is slowed, takes people longer to locate the image of beaker

Masked priming

Prime word is presented subliminally, that is, too quickly to be consciously recognized. Sequence: mask ##### (1,000 ms) → prime NURSE (50 ms) → mask ##### (500 ms) → target DOCTOR (response required: is it a word?).
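
A sketch of the same trial as a data structure, mainly to make the timing explicit. The event labels and the decision to compute cumulative onsets are illustrative, not part of any standard experiment toolkit.

# Masked-priming trial as data: each event has a role, a stimulus, and a duration (ms).
# The target stays on screen until the participant responds (duration None).

trial = [
    ("mask",   "#####",  1000),
    ("prime",  "NURSE",    50),   # subliminal: too brief to consciously recognize
    ("mask",   "#####",   500),
    ("target", "DOCTOR", None),   # lexical decision: "is it a word?"
]

elapsed = 0
for role, stimulus, duration in trial:
    print(f"{elapsed:>5} ms  {role:<7} {stimulus}")
    if duration is not None:
        elapsed += duration
#     0 ms  mask    #####
#  1000 ms  prime   NURSE
#  1050 ms  mask    #####
#  1550 ms  target  DOCTOR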

Lexical decision task: What can get in the way of testing?

Problem: Participants might try to anticipate patterns within the experiment in order to respond strategically, so their response times might say less about what people typically do in daily conversation. Tactic: Eliminate the expectation that the correct response to the target will be to press the "Yes, it's a word" button by including enough non-words that the likelihood of the target being a real word is exactly 50% over the course of the experiment. Problem: If participants begin to catch on to the fact that the target is often related to the prime, they might start approaching the task as a word association task: once they see the prime nurse, for example, they might start actively thinking of related words. Tactic #1: Include a great number of filler items in which the prime and target aren't related in any way, making it harder to detect the pattern. FREEDOM—METAL WRENCH—BOOK HANDLE—SHOES NURSE—DOCTOR FLOWER—SCREEN PAPER—ROOF vs. FREEDOM—METAL WRENCH—HAMMER HANDLE—DOOR NURSE—DOCTOR FLOWER—VASE PAPER—ROOF Tactic #2: Provide participants with as little time as possible to anticipate words that are related to the prime, by shrinking the time between the prime and the presentation of the target.

Facilitation

Processes that make it easier for word recognition to be completed

Inhibition

Processes that result in word recognition becoming more difficult. (Think inhibit)

Word Recognition in Sign Language

Recognizing signs poses similar problems to recognizing spoken words: you need to quickly distinguish individual signs from similar ones. Signs can have neighbors that resemble them in location, handshape, or movement. Sign language requires segmenting individual signs out of a stream of movements (identifying where each sign begins and ends). Some key differences: Transitions between words in signed languages may be more obvious than in spoken languages, so the segmentation problem may be somewhat easier in signed languages; the movements that link signs together have different properties. Information about the sounds of spoken words unfolds linearly in time, while the dimensions of handshape, location, and movement overlap a great deal in time in signs. Signs show a greater degree of iconicity, making the relationship between signs and their meanings generally less arbitrary than between spoken words and their meanings; this layer of information may be used to recognize signs.

Motley and Baars (1979)

Recruited male subjects and randomly assigned them to two different conditions. Condition #1: Administered by an attractive, provocatively dressed female experimenter. Condition #2: Subjects wore fake electrodes and were (falsely) told that mild electric shocks might be applied during the experiment. All the subjects did the same SLIP test: they read interference sets and target pairs that consisted of nonsense syllables (sequence: bood tegs, goot negs, goob leks, lood gegs). Some of the target pairs resulted in real words along a "sexy" theme (lood gegs → good legs). Other target pairs might lead to real words of an "electrical" theme (shad bock → bad shock). Found that the subjects who interacted with the seductive experimenter made more "sexy" spoonerisms, while Group #2, who were under threat of being zapped, made more "electrical" sound exchanges. But sometimes the guys in the "sexy" condition made "electrical" speech errors and vice versa.

D'Angelo & Humphreys, 2015

Retrieving a word on your own helps prevent future tip-of-the-tongue (TOT) states for that word. People who retrieved a word themselves had fewer TOT states for that word compared to those who were given the word. The time taken to retrieve the word did not affect this. Even if phonological clues were needed to retrieve the word, retrieval still helped establish the connection between the lemma and its form. Generating the word on your own is important for establishing the right connections between the lemma and its form.

Charles Perfetti and his colleagues (2010): fMRI Chinese vs English Reading Comprehension

Reviewed a number of fMRI studies of reading in Chinese and in English. There are some similarities in the brain regions involved in reading both languages; both rely on a reading network in the brain that connects visual areas with phonological areas. (Even though Chinese writing is more symbolic, the symbols are still very connected to information about the words' sounds and meanings.) Reading is largely localized in the left hemisphere. There is more bilateral activity in the visual areas for Chinese readers. Chinese readers showed activation over a larger frontal area than English readers, but reduced activity in temporal areas that play a role in matching graphemes to phonemes.

Picture-word interference task (part 2)

Showed subjects pictures of two objects side by side. Instruction: Describe the pictures using the sentence frame "The X is next to the Y." As the picture appeared (or very slightly before or after), subjects heard a distractor word (either unrelated, or semantically or phonologically related to either the first or the second word). (a) When the distractor was semantically related to the first word: slower to begin speaking, suggesting that they were in the process of retrieving the first word's meaning before starting to speak and were experiencing competition from the distractor. (b) When the distractor was phonologically related to the first word: faster to begin speaking, indicating that they were also filling in the sounds for the first word and got a boost from the related distractor at this point. (c) When the distractor was semantically related to the second word: slower to begin speaking, because they were retrieving the lemma of the second word that would appear later. (d) When the distractor was phonologically related to the second word: no effect, indicating that people had retrieved the second noun's lemma before speaking, but they were not yet thinking about its sounds, which would come later.

Manuel Carreiras (2008): Sign language

Signs that overlap in form with the target compete for lexical access, but the effect depends on whether the overlap involves location or handshape: same location slowed down recognition, while same handshape sped it up. Naomi Caselli and Ariel Cohen-Goldberg (2014) argued that a model of spoken word recognition can account for the effects of location and handshape overlap by considering that location is perceived more quickly and easily than handshape. By adjusting the model to give location competitors earlier and stronger activation than handshape competitors, it could replicate the opposite effects of location and handshape overlap.

Localist representation

Sound units connect to word nodes, which in turn connect to semantic features that become turned on when the word node is activated. [Sound units] --> [Word node: "Lion"] --> [Semantic features: Feline, Carnivore, Mane, etc.] [Sound units] --> [Word node: "Tiger"] --> [Semantic features: Feline, Carnivore, Stripes, etc.]
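
For contrast with the distributed sketch above, here is a minimal localist version, with illustrative entries: sounds point to a dedicated word node, and that node switches on its bundle of semantic features.

# Localist representation sketch: sounds map to a single word node,
# and that node turns on its semantic features. Entries are illustrative.

sound_to_word = {
    ("l", "ai", "o", "n"): "lion",
    ("t", "ai", "g", "er"): "tiger",
}

word_to_features = {
    "lion":  {"feline", "carnivore", "mane"},
    "tiger": {"feline", "carnivore", "stripes"},
}

def recognize(sounds):
    """Route activation through the word node before any meaning is available."""
    word = sound_to_word[sounds]              # the intervening word node
    return word, word_to_features[word]       # features switch on once the node fires

print(recognize(("l", "ai", "o", "n")))
# ('lion', {'feline', 'carnivore', 'mane'})  -- set order may vary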

COME BACK TO THIS ONE

Speech errors provide clues about how linguistic information is accessed and assembled in a specific order. By analyzing slips like "a meeting arathon" and "an istory of a hideology," researchers have found that assembling the sounds of content words happens at an earlier stage than filling in the sounds for function words. This means that different kinds of linguistic information are accessed in a specific order when we speak.

tip-of-the-tongue state

State of mind experienced by speakers when they have partially retrieved a word (usually its lemma, and perhaps some of its sound structure) but feel that retrieval of its full phonological form is elusive. Evidence: when people experience a tip-of-the-tongue state, they don't just have the illusion of knowing the word; they really have accessed it, only partially. One study of Italian speakers (Vigliocco et al., 1997) showed that they could reliably report the grammatical gender of the nouns that eluded them, bolstering the notion that lemmas are marked for syntactic as well as semantic information.

Movement and speech

Studies show that people are faster to respond to a word (pen, knife) while acting as if they are using the corresponding object. They are faster at responding to sentences like "He closed the bathroom door" if the response requires them to act out the motion. Reading the word typewriter speeds up responses to piano. Words take less time to read if people can imagine interacting with the referent physically (cat vs. sun). People with unmedicated Parkinson's show reduced priming for words that involve action. People who have visual impairments have difficulty processing words for things we experience visually (birds).

Guy Van Orden and colleagues (1987, 1988)

Subjects had to decide whether words that they saw on a screen were part of a larger category. Category: flower. Presented with either real words (daisy, pear, rows) or non-words (roze). Two buttons: [Yes] [No]. Result: People slowed down for non-words that sounded out like a real word in the category (roze) and for real-word homophones (rows), but orthographically similar words and non-words (robs, rone) did not slow down the response times. Even familiar words were being converted into their sound patterns and causing confusion when they were pronounced the same as the word rose, indicating that the source of the confusion really was at the level of sound.

Kay Bock (1986): Syntactic priming

Subjects heard and had to repeat specific sentences that were used as primes, showing various sentence structures: Active: "One of the fans punched a referee." Passive: "The referee was punched by one of the fans." Prepositional: "A rock star sold some cocaine to an undercover agent." Double-object: "A rock star sold an undercover agent some cocaine." After each sentence, subjects saw pictures depicting events, and they had to convey what was happening in the picture using a single sentence. This variation in priming sentences was designed to see whether the primed structure would affect the freely structured sentences that people used to describe the pictures they saw afterward. Results: Subjects' choice of syntactic structure was indeed swayed by the syntactic structure that appeared in the prime sentence; when participants heard and repeated a passive sentence, they were more likely to produce another passive sentence. This suggests that the structure of the previously heard sentence made the passive structure more accessible in their minds, though they still used the active voice more often overall. Other studies have found that people tend to reuse linguistic structures in their real-world conversations as well. There are also cases where people may choose a passive structure in their speech simply because it makes more sense for the situation they are describing.

Paul Allopenna and colleagues (1998): Eye movement and cohorts

Subjects heard spoken instructions ("Pick up the beaker") with visual displays that always contained the target object plus other objects, along with a cohort competitor: beetle, wheelchair, beaker, stroller. Result: People were more likely to look at the cohort competitor object (beetle) than at unrelated objects; these eye movements were initiated extremely early in the word, based on a small amount of phonetic information. What this says: People quickly narrowed down the potential matches for the incoming word based on small phonetic cues, sometimes just a sound or two. Eye movements toward the other potential matches tapered off, indicating that the brain had already identified the target as the actual word before the end of the word occurred.

Mark Smith and Linda Wheeldon (1999)

Subjects looked at a computer screen displaying pictures of several objects that moved to new positions on the screen. Task: Describe which objects had moved where. The researchers measured how long it took people to begin speaking depending on how complex the elements of the sentence were. (a) The dog moves up. (b) The dog and the foot move up. Result: People were faster to begin saying sentence (a) than sentence (b), suggesting they needed extra time to prepare the more complex sentence. Smith and Wheeldon also showed an interesting difference between the next two sentences: (c) The dog and the foot move above the kite. (d) The dog moves above the foot and the kite. Result: Sentences (c) and (d) are equally complex, but speakers took longer to begin sentence (c) than sentence (d). Why: (c) has a more complex subject phrase ("the dog and the foot") than (d), which has a simple subject ("the dog"). This suggests that speakers needed extra time to prepare the more complex subject phrase before beginning to speak.

(Davis & Herr, 2014): unconscious word reception

Subjects read and rated a travel blog post that ended with either "bye bye" (which sounds like "buy buy") or "so long." Immediately after, in what they were told was an unrelated task, subjects were asked how much they'd be willing to pay for a specific restaurant package. Result: Those in the "bye bye" condition said they were willing to pay more than those in the "so long" condition. (This effect was only seen if the subjects were given the extra task of remembering a 7-digit number over the course of the experiment; the researchers argued that this extra mental load prevented them from suppressing the irrelevant meaning of "bye.") Backlash: Some researchers see the replication failures as evidence that the original studies were either flawed or products of statistical bias. Others have argued that the effects are probably real, but can be affected by a number of factors that need to be studied in more detail.

Picture-word interference task (part 1)

Subjects see pictures of objects with words superimposed on them and are asked to name the objects as quickly as possible while ignoring the words (in some variants, subjects looked at a picture while a spoken distractor played). (1) Word semantically related to the picture (word cat + picture of a dog): people are slower to name the picture. (2) Word and picture unrelated (word knee + picture of a dog): people are quicker to name it than in (1). (3) Word phonologically related to the picture (word doll + picture of a dog): people are quicker to name it than in (2). The effect does seem to be about language production rather than about accessing concepts, because you don't see it when people are simply asked whether they recognize the objects in the picture but don't have to name them.

Errors involving word units

Substitutions: a word is replaced by one not meant to appear in the sentence: nationalness of rules (Intended: naturalness of rules); I have some additional proposals to hang out (Intended: hand out); chamber maid (Intended: chamber music). Blends: two words (often similar in meaning) are fused together: I swindged (switched/changed); She's a real swip chick (swinging/hip); A tennis athler (athlete/player). Exchanges: two words trade places: examine the horse of the eyes (Intended: the eyes of the horse); sickle and hammer (Intended: hammer and sickle).

logographic writing system

Symbols are mapped to units of meaning such as morphemes or words rather than to units of sound. (Like Chinese)

the ABC problem

Alphabetic writing systems are pretty unnatural from an outside view; recognition of distinct sound units is not natural and usually has to be learned. "How many sounds are there in the word bed?" "What are you left with if you take the first sound away from the word spring?" "What sounds do the words bag and bed have in common?" (These are hard to answer for illiterate kids and adults.) But awareness of words, syllables, or parts of syllables comes more easily: "How many syllables in the word bicycle?" "What sounds do the words fling and spring have in common?" Syllable onsets and rimes are more cognitively accessible than individual phonemes. Onset: the material in a syllable that precedes the vowel. Rime: the material in a syllable that includes the vowel and anything that follows.

What do sound errors tell us

The fact that sound-based errors occur within a smaller window than word-based errors suggests that people plan ahead when choosing words, before thinking about their sounds. Means that words can be inserted into a sentence as "word kits" based on their meaning and syntactic category, before being assembled into their sounds. Psycholinguists call these "word kits" lemmas. This process suggests that there are separate stages involved in choosing words and pronouncing them.

Word mix up

These are mistakes in which people mix up the order of parts of words, instead of mixing up whole words: someone might say "pinch bummed" instead of "bum pinched," with the stress switching to pinch instead of bum. The word that was moved around will often be stressed less than it would have been in its original place. People choose the words they want to use before they figure out how to fit them into a sentence, and they might mix up the order of the words when they try to put them into the right slots. When people change the order of words, it can change how the words sound when they are pronounced: the word "pinched" sounds different when it's pronounced on its own compared to when it's part of the phrase "bum pinched." Because word choice happens before the words are fitted into the sentence frame, these errors can affect how the words sound and which words are emphasized when the sentence is spoken.

decay function

The rate at which information fades in memory: info that has become activated gradually returns to a baseline level of activation. [lion] ------> [tiger] ---> [stripes] -x-> [paisley]
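
A minimal sketch of decay back to baseline, assuming a simple exponential form; the rate and baseline values are illustrative, since the card doesn't specify a particular equation.

# Decay sketch: activation drifts back toward a resting baseline over time.
# Exponential form and all numbers are illustrative assumptions.

import math

def decayed_activation(start, baseline=0.1, rate=0.5, t=0.0):
    """Activation after t time units, decaying from `start` toward `baseline`."""
    return baseline + (start - baseline) * math.exp(-rate * t)

for t in range(5):
    print(t, round(decayed_activation(1.0, t=t), 3))
# 0 1.0
# 1 0.646
# 2 0.431
# 3 0.301
# 4 0.222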

Agnieszka Konopka and Antje Meyer (2014)

The researchers tracked their subjects' eye movements to see whether they surveyed both characters in the scene or quickly settled on one character before naming them. Found: When participants viewed scenes that were easy to plan, they took more time to survey the entire scene. When participants viewed scenes that were hard to plan, they focused on one character and started planning their message around it. This indicates that participants could tell within a few hundred milliseconds that their message would take time to form, so they began naming the first character while they continued to plan the rest of their message.

Seidenberg & McClelland (1989): Connectionist model of reading

There are clusters of similar irregular words ("resign," "benign," "align"). This suggests there are smaller rules embedded among the exceptions and that there is a continuum between the most unusual forms and the most regularly patterned; what appear to be rules are really just the strongest patterns. So irregularly spelled words have some letter/sound pairings that we can rely on to read them correctly, and there may be smaller rules embedded within these exceptions. The more we encounter certain letters, the more they activate specific sounds in our mind. If these sounds are frequently activated together, the connection between them becomes strong, almost like a rule. For irregular words, if the word is frequently used, the connection between its letters and its meaning becomes strong, helping us recognize the word more efficiently. This is why we rely more on sound patterns for regular words and more on letter-to-meaning links for irregular words. And the less we encounter a word and words like it, the weaker the connections.

Why are writing systems different?

There are cognitive, practical, political, and historical reasons different writing systems are the way they are so writing systems often emerge as combinations of different mapping approaches. Chinese heavily relies on logographic mapping because it was designed to be understood across multiple dialects spoken throughout the vast country. A writing system that maps onto meaning rather than sound can unify multiple dialects or languages, making it a viable cultural export into other linguistic groups.

Sarah Brown-Schmidt and Michael Tanenhaus: Directional Speech Task

Two conversational partners took turns giving each other instructions to click on certain objects in complicated visual arrays of objects. Complex descriptions were necessary in some displays but not in others. The researchers were interested in how participants handled complex noun phrases like "the large circle" or "the square with small triangles." Found: Participants were able to fluently produce complex phrases if they registered the crucial information early enough. But if they became aware of the necessary information too late, they would need to backtrack and add the information in a conversational repair.

Homophones

Two or more words that have separate, non-overlapping meanings but sound exactly the same (even though they may be spelled differently). bred, bread made, maid side, sighed flea, flee none, nun blew, blue missed, mist main, mane bridal, bridle waste, waist know, no in, inn sun, son stare, stair seen, scene fair, fare

Max Coltheart and colleagues (1993, 2001): the dual route model

Two pathways that link graphemes with meaning: Direct route: a series of orthographic symbols is directly connected with the meaning of a word. How it helps: allows us to handle the messy exceptions that arise over time in a mixed-up language. Assembled phonology route: graphemes are "sounded out" against their corresponding sounds, beginning at the left edge of the word. How it helps: allows us to read words that we know from spoken language but haven't yet mastered in print. As people get better at reading, they seem to rely less on the phonological route.
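
A minimal sketch of the two routes, with a made-up mini-lexicon and a handful of grapheme-phoneme rules; this is not Coltheart's actual DRC implementation, just the division of labor the card describes.

# Dual-route sketch: a direct lookup route for known (often irregular) words,
# and an assembled-phonology route that sounds words out left to right.
# Lexicon and rules are illustrative, not the actual DRC model.

direct_lexicon = {"yacht": "/jot/", "colonel": "/kernel/", "cat": "/kat/"}

grapheme_rules = {"ch": "/ch/", "sh": "/sh/", "c": "/k/", "a": "/a/",
                  "t": "/t/", "s": "/s/", "n": "/n/", "o": "/o/", "r": "/r/"}

def assembled_phonology(word):
    """Sound the word out, matching the longest grapheme at each position."""
    i, phonemes = 0, []
    while i < len(word):
        for size in (2, 1):                       # prefer two-letter graphemes
            chunk = word[i:i + size]
            if chunk in grapheme_rules:
                phonemes.append(grapheme_rules[chunk])
                i += size
                break
        else:
            i += 1                                # skip letters with no rule
    return "".join(phonemes)

def read_word(word):
    """Direct route wins if the word is known; otherwise sound it out."""
    return direct_lexicon.get(word) or assembled_phonology(word)

print(read_word("yacht"))   # /jot/      -- irregular word, handled by the direct route
print(read_word("chat"))    # /ch//a//t/ -- unfamiliar string, assembled route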

John Bargh and colleagues (1996): Word association & action

Undergraduate students formed sentences out of scrambled word lists one set of students seeing lists that contained words associated with the elderly (Florida, wrinkles, bingo, gray) other students got lists of neutral control words. Result: Those who'd been exposed to the words associated with the elderly walked more slowly than those who'd been in the control condition

Karin Humphreys and colleagues (2010)

Used a common lab technique for inducing spoonerisms or sound exchanges (beg pet instead of peg bet). Subjects who made speech errors were more likely to repeat the same mistake again in the next few minutes. However, the likelihood of making the same mistake again decreased after 48 hours. This suggests that taking a break from speaking may help reduce the chance of repeating the same mistake.

Aphasia and how it affects speech

When people have aphasia, they often have trouble accessing their word knowledge. But it doesn't seem like specific groups of words are lost, as would happen if words were stored as whole units (as in a localist model). The problem is more general: it affects all words to some extent. It's as if the damage affects different parts of all the word representations, instead of just some specific words.

Exchanges of whole words vs sounds

When words are exchanged, the errors almost always involve words of the same syntactic category: nouns are swapped with nouns, verbs with verbs. When sounds are exchanged, the errors usually occur between adjacent words and don't care about syntactic category, but it would be very unusual to have a sound exchange spanning distant words, as in "Let's daze our glasses to the rear old queen" (Intended: raise ... dear). What sound exchanges do care about is that the affected sounds come from similar parts of words: beginnings of syllables hardly ever swap with the ends of syllables, so in "Let's raise our glasses to the dear old queen," "dear old queen" doesn't become "near old queed." Sound exchanges also involve similar types of sounds: consonants don't exchange with vowels.

Virginia Woolf

Woolf believed that words are not very good at being useful because they can have so many different meanings and associations. Even a simple sentence like "Passing Russell Square" can evoke many different thoughts and emotions beyond its literal meaning. The word "passing" can suggest change and the passage of time, while "Russell" brings to mind history and the ducal house of Bedford. The word "Square" evokes images of shapes and textures. So, a single sentence can inspire imagination and memories that go beyond what it literally says.

Polysemous words

Words that have many related but distinct meanings: She's got a run in her stocking. There was a run on the banks this week. Sam went out for an early morning run. I'd like to run my fingers through your hair. Let's run through the various options. He's had a run of bad luck. Can you run this over to the post office?

Homographs

Words that are spelled exactly the same but have separate, non-overlapping meanings (and may or may not sound the same) The performer took a deep bow. It's difficult to hunt with a bow and arrow. Jerry is headed down the wrong road. I've really been glad to have my down parka this winter. Silvia is content with her lot in life. The content of this course is difficult.

syllabic writing system

Writing system in which characters represent different syllables. (Japanese)

Duality of patterning and writing

Written language doesn't always exhibit the property of duality of patterning. In spoken languages, sounds that have no intrinsic meaning are combined to form larger, meaningful units like morphemes and words. In logographic writing systems, the smallest units of combination do have intrinsic meaning. But duality of patterning is universal in spoken languages

graphemes

Written symbols, analogous to phonemes in spoken language. Individual graphemes may or may not correspond to individual phonemes (for example, two graphemes are used to represent the sound /k/ in sick).

Excitatory connections vs inhibitory connections

[beam vs beat activation] Once the last sound or letter of beam is perceived, it excites the word beam, but beat will be inhibited
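
A minimal sketch of the beam/beat example: one update step in which the final perceived sound /m/ sends excitation to "beam" and inhibition to "beat." The update rule and all numbers are illustrative, not taken from TRACE or any specific model.

# Excitatory vs. inhibitory connections: perceiving the final /m/ of "beam"
# boosts the word "beam" and suppresses "beat". All values are illustrative.

activation = {"beam": 0.6, "beat": 0.6}   # both active after hearing "bea-"

connections = {                            # from the perceived sound /m/
    "beam": +0.3,                          # excitatory: /m/ is part of "beam"
    "beat": -0.3,                          # inhibitory: /m/ conflicts with "beat"
}

def update(activation, connections, input_strength=1.0):
    """One update step: each word gains or loses activation from the input sound."""
    new = {}
    for word, act in activation.items():
        new[word] = min(1.0, max(0.0, act + input_strength * connections[word]))
    return new

print(update(activation, connections))
# {'beam': 0.9, 'beat': 0.3}  -- beam is excited, beat is inhibited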

partial words

Instead of giving subjects entire words, present partial words and see whether there's any evidence of priming for words that are related to potential matches of the partial word. Example: a recording of the word conform with the sound file cut off right in the middle of the /f/. Statistically, the sound sequence /nf/ is extremely unlikely to correspond to the end of a word, so hearers should be able to guess that the end of the word hasn't occurred yet, based solely on information about the sound patterns of English words. So, if lexical activation is delayed until the ends of words are identified, we wouldn't expect to see priming for any words that are related to conform (for example, copy or imitate). On the other hand, if lexical activation is initiated before the end of the word, we'd expect to see priming not only for words related to conform, but also for words related to other possible continuations of this snippet, that is, words semantically related to conflate, confabulate, confuse, confine, confide, conflicted, and so on. Such words, with their overlapping onsets, are known as cohort competitors.

How do these competing words compete in the real world

Evidence of competition: comparing how quickly people retrieve words that have either many or few sound-alikes in the general lexicon, even when those sound-alike words aren't present. Ex. stench vs. sling.

Jessica Nelson (2009): Learning English vs Chinese observed by fMRI

fMRI studies showed: Chinese subjects who learned English as a second language were able to use the same reading networks for both English and Chinese. English speakers who learned to read Chinese showed different patterns of brain activity for the two languages. It may be possible to read English as if it were Chinese, by recognizing whole words and matching them to word meanings rather than sounding them out. But a "sounding-out" strategy doesn't work for reading Chinese, so English learners of Chinese would have been forced to develop new skills for reading.

facilitation vs inhibition in semantic priming

Facilitation: related words are quicker to retrieve. Inhibition: the more words that light up, the more they could get in the way of identifying the right word, leading to mix-ups.

Jean Fox Tree (2001): Word recognition and disfluencies

Found that words preceded by uh were recognized more quickly than words appearing in a "clean" stream of speech with the disfluency spliced out.

Cohort model suggests...

Identifying the ends of words is not necessary for generating potential matches, but the model relies on identification of the left edge of words (i.e., the beginning) for activation of potential candidates. This means that it is essential to know where the beginning of the word is, which is a flaw given all the ways speech can be disrupted in normal environments.

How to shorten time without risking participants making shallow decisions

Interstimulus interval (ISI): the amount of time between the offset of the prime and the onset of the target. The issue with using a snapshot technique to capture a temporally dynamic process is that response times will be affected by how deeply people process the target words before making a decision. If the interval is too short and the non-words are obviously distinct from possible words (bgltx, aoitvb), then participants may make shallow decisions. If the non-words look like possible words (blacket, snord), then participants will have to process them more deeply before deciding.

David Swinney (1979): Crossmodal Priming Task

Tested whether competing meanings of ambiguous words become simultaneously activated even when there's plenty of context. bug: small creature (activation: ant, ladybug, worm); bug: surveillance device (activation: spy, surveillance, recorder). Passages were introduced with either a biased or a neutral context: 1: Rumor had it that, for years, the government building had been plagued with problems. The man was not surprised when he found several spiders, roaches, and other bugs/insects in the corner of his room. 2: Rumor had it that, for years, the government building had been plagued with problems. The man was not surprised when he found several bugs/insects in the corner of his room. Tested: Immediately after hearing the critical word, subjects had to respond to test words (ant, spy, sew) presented on a screen by pressing (word) or (non-word). Result: Response times were faster to both ant and spy after the ambiguous word (bugs) than to the control word sew. After the unambiguous word (insects), ant was recognized faster than spy and sew.

David Swinney (1979): Crossmodal Priming Task (redone)

The word "bugs" was said out loud, and the test word appeared on screen three syllables later. Results: the response to "ant" was fast, while responses to "sew" and "spy" were about the same. This suggests that the irrelevant meaning of "bug" (the surveillance device) was briefly activated but quickly suppressed when it was no longer relevant. Based on these results, researchers believe that when we hear a word, our brain activates all possible meanings of that word, at least for a short time. Later research has shown that in some cases, contextually inappropriate meanings never become prominent enough to affect our understanding of the sentence.

