NSCS 320 Exam 2

How does it perform compared to humans?

Most likely she did not guess: the program implementing the dot-counting algorithm matches her responses much better for this particular task.

Give an example of Backpropagation

NETtalk is an ANN designed to transcribe written English into spoken English. It is fed a series of texts, and you can listen to a voice speaking the text you give it. Given a written letter, the output is the pronunciation of that letter through a speech synthesizer. It uses supervised learning: feedback is given when a pronunciation is correct or incorrect. After 50 passes through the sample text, NETtalk achieved 95% accuracy. Tested on a new sample of data, it achieved 78% accuracy, showing that it generalized its learning to new data.

Connectionist Models review

One of the goals of cognitive modeling is bridging the gap between behavioral and neural processes. Nodes are analogous to neurons: input units are like light hitting our retina, hidden units manipulate that information, and output units are like behavior.

Pattern recognition

One of the main functions of the visual system is pattern recognition. It enables us to recognize that the large object in the street is a car, that the person next to you is your best friend, and to understand the words that you are reading on your screen.

Backpropagation

The output layer's response is compared with the actual correct response to the input; this comparison is what makes it supervised learning. The weights at the hidden layers are then adjusted using an error signal, so the next time the network sees a cat, the new weights will produce a different, more accurate output. For example, if the model gets it wrong and says "dog" when the input is a cat, backpropagation comes into play: the error is propagated back, the weights are adjusted, and the next time the network is more likely to be correct.
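
As a rough illustration, here is a minimal sketch of one backpropagation update for a tiny network with a single hidden layer of sigmoid units. The layer sizes, weights, and learning rate are made up for the example, not taken from any specific model in the course:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes: 4 input features, 3 hidden units, 2 output classes (cat vs. dog)
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(4, 3))   # input -> hidden weights
W_output = rng.normal(size=(3, 2))   # hidden -> output weights
lr = 0.1                             # learning rate (illustrative)

x = np.array([1.0, 0.0, 1.0, 1.0])   # input pattern (features of a cat image)
target = np.array([1.0, 0.0])        # correct answer from the "teacher": cat

# Forward pass
hidden = sigmoid(x @ W_hidden)
output = sigmoid(hidden @ W_output)

# Error signal at the output (e.g., the model said "dog" but it was a cat)
error_out = (output - target) * output * (1 - output)

# Propagate the error back to the hidden layer
error_hidden = (error_out @ W_output.T) * hidden * (1 - hidden)

# Adjust the weights so the next cat produces a more accurate output
W_output -= lr * np.outer(hidden, error_out)
W_hidden -= lr * np.outer(x, error_hidden)
```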

What value of z generates data that best matches the behavioral data?

Parameters: 1) drift rate (A), 2) threshold (z), 3) noise (c). In other words, what threshold value is the computer using when it most closely matches the participant's responses? This could reveal something about how humans set thresholds for responding as well.
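
A minimal sketch of how that fitting could work, assuming a toy drift diffusion simulator and made-up observed accuracy and reaction-time values; a grid of candidate z values is tested and the one whose simulated data best matches the behavioral data is kept:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ddm(drift, z, noise, n_trials=200, dt=0.01):
    """Toy simulator (illustrative): returns (accuracy, mean RT in seconds)."""
    rts, correct = [], []
    for _ in range(n_trials):
        evidence, t = 0.0, 0.0
        while abs(evidence) < z:
            evidence += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        rts.append(t)
        correct.append(evidence > 0)     # upper boundary = correct response
    return float(np.mean(correct)), float(np.mean(rts))

# Made-up behavioral data for one participant
observed_acc, observed_rt = 0.85, 0.62

# Grid search over candidate thresholds: which z best matches the participant?
best_z, best_err = None, np.inf
for z in np.linspace(0.5, 2.0, 16):
    acc, rt = simulate_ddm(drift=1.0, z=z, noise=1.0)
    err = (acc - observed_acc) ** 2 + (rt - observed_rt) ** 2
    if err < best_err:
        best_z, best_err = z, err

print("best-fitting threshold z:", round(best_z, 2))
```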

Visual perception and object recognition

Perception is the process by which we gather information from the outside world via the senses and interpret that information. Vision is the most sophisticated sense we rely on.

CNNs

Pooling layer: a down-sampling layer that reduces computational overhead while maintaining the feature maps. The image is then classified with fully connected layers.
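
A minimal sketch of what a pooling layer does, assuming 2x2 max pooling on a small made-up feature map:

```python
import numpy as np

def max_pool_2x2(feature_map):
    """Down-sample a feature map by taking the max over each 2x2 block."""
    h, w = feature_map.shape
    trimmed = feature_map[:h - h % 2, :w - w % 2]          # drop odd edges
    blocks = trimmed.reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

feature_map = np.array([[1, 3, 2, 0],
                        [4, 2, 1, 1],
                        [0, 1, 5, 2],
                        [2, 0, 3, 4]], dtype=float)

print(max_pool_2x2(feature_map))
# [[4. 2.]
#  [2. 5.]]
```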

Feature integration theory (Treisman)

Pre-attentive stage vs. focused attention stage. Pre-attentive stage: an early stage in pattern recognition that happens automatically and effortlessly. It does not require focused attention and includes registration of basic features such as color, motion, orientation, and curvature.

Feature integration theory and visual search ..

Serial search: the black T does not "pop out" in the display; you have to search through each item in the display to find it. The search is done serially, meaning that you have to look at each letter separately before you can find the letter you are looking for. This is shown in the reaction time data on the graph, where reaction time increases with the number of distractors in the display. This provides evidence for the focused attention phase of feature integration theory, where the features of an object must be bound together with conscious attention.

Recognition by components theory...

Shape constancy problem in perception: an object may appear in different locations, sizes, orientations, and shapes. Any pattern recognition system must produce a description of the object that can handle these variations. Template matching and feature detection theories do not address object constancy.

Did she count the dots

Step 1: count the dots. N_left = number of dots moving left, N_right = number of dots moving right. Step 2: compare and decide. If N_left > N_right, choose left; otherwise choose right. Now let's program a computer to respond based on these rules to test each hypothesis.
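
A minimal sketch of the dot-counting hypothesis as a program, assuming the stimulus is just a made-up list of each dot's motion direction:

```python
# Hypothetical dot-motion stimulus: each dot's direction is 'L' or 'R'
dots = ['L', 'R', 'R', 'L', 'R', 'R', 'L', 'R']

# Step 1: count the dots moving in each direction
n_left = dots.count('L')
n_right = dots.count('R')

# Step 2: compare and decide
response = 'left' if n_left > n_right else 'right'
print(response)   # 'right' for this stimulus
```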

Broadbent's filter model

Studies the mechanisms of selective attention. There is an attended channel, which participants should pay attention to, and an unattended channel, which they should ignore. When asked to repeat the attended channel, they were very accurate. When asked to repeat the unattended channel, they couldn't remember it. This suggests that some information gets "filtered out" through attentional mechanisms.

The Cognitive Neuroscience of Object Recognition

The what and where pathways; agnosia; apperceptive agnosia; associative agnosia; prosopagnosia; neural coding.

Evaluating feature detection theory ...

Then the prediction is updated and sent back down again to compare against the bottom-up input, until a match is reached. A new prediction from the top-down areas predicts balloons instead; the prediction now matches the bottom-up input and recognition occurs.

The Network Approach

There are outputs from each node that transmit information to the next layer. These mirror how neurons fire an action potential (AP) to communicate with surrounding neurons.

Selective attention

This is a form of attention that can be focused onto one source of information and away from another. For example, look around the room and focus on something red. Your attention is focused on a single object and is taken away from other objects in the room.

Divided attention

This is a form of attention that can be split or divided among several alternative informational sources. For example, look around the room and focus on an object that is red. At the same time, focus on a sound you can hear. Your attention is divided between the red object in the room and the sound that you hear.

Artificial Neural Networks (ANNs)

Typical computers are serial processors: they perform one computation at a time, processing discrete units in steps (1, 2, 3, ...). ANNs perform computations in parallel, and the items they represent are stored in the patterns of activation of the network.

What is computer vision?

Use neural networks to accomplish tasks such as reconstructing a large scene from individual photographs, tracking a person walking, and detecting faces in a photograph.

Computer vision

Use neural networks to accomplish tasks such as reconstructing a large scene from individual photographs, tracking a person walking, and detecting faces in a photograph. Computers can't do the "are you a human" tests (e.g., pick all the bikes, drag the puzzle piece) as well as humans can.

What is computational modeling

Using math and logic to understand behavior. Aspects of the experiment (stimuli, order effects, past experience, the participant being sick that day) → behavioral and neural data (choices, reaction times, eye movements, EEG). Mathematical equations help us link aspects of the experiment to the behavioral and neural data.

Verbal vs computational descriptions

Verbal descriptions use words to describe what we think is happening. Computational modeling uses algorithms and mathematical equations to describe cognitive processes.

How do we teach computers to do this?

We can use neural networks to achieve this, specifically a convolutional neural network (CNN).

Goal of computational modeling

We want to capture the essential features of a cognitive system so we can better understand and predict behavior.

Theory 2: connections

Acquisition of language occurs via gradual adjustments of connections among simple processing units; mistakes are generated along the way by the network model. Rules can be described in retrospect by linguists, but in reality no actual rules operate during the processing of language.

ANNs' connections are weighted

All inputs are multiplied by their weights (W); each connection has its own weight, and each node's activation differs. For example, an activation value of 2 multiplied by the connection's weight gives the output that is passed on to the next layer. ANNs use the input/output structure of neurons as a map for representation.

Advantages of computational modeling

Allows us to make predictions about how a cognitive system works: we can alter the parameters of a model, see the results, and test these predictions using experimental studies. These predictions are logically valid; it can be more difficult to ensure your assumptions are logically valid when dealing with verbal descriptions of cognitive processes. The predictions are quantitative and precise, which lets us pinpoint exact changes in simulated behavior based on slight changes to parameter values (more difficult to do with verbal descriptions). It also allows us to generalize beyond our verbal theory: we can generate new predictions for behavior that go beyond our existing dataset.

How do children learn English past tense?

Children learn in a U-shaped developmental pattern: children's early inflections are correct, then they start to make over-regularization errors (hitted, sleeped, goed), and in the end children recover from these errors.

Theory 1: symbols + a set of rules

Children learn this as a set of rules. The rules are symbolic, discrete, and categorical (Chomsky, Fodor, Pinker), and they are acquired all of a sudden, like an epiphany. Rule 1: verb stem + -ed; the rule is independent of the meaning or frequency of the verb stem (for example, walk → walked). Rule 2: learn the individual exceptions to Rule 1 (e.g., learn to associate go with went). Errors occur due to overuse of the first rule (over-regularization).

Summary

A cognitive model is an artificial system that behaves similarly to a cognitive system. The model simulates, predicts, and explains human behavior. Models give us insight into how cognition works.

Hypotheses: we make hypotheses and theories like this all the time in cognitive science.

Did she guess, or did she count the dots? How do you know which one is right? The same question applies to, say, a theory of human memory.

Semantic networks

A different type of network that does not use distributed coding; instead, each node has a specific meaning or concept. Different types of connections between nodes can represent a variety of relationships between concepts and features.

Drift diffusion model

Imagine dots moving in different directions over time. You can set a threshold and track the net difference in the number of dots (N_right − N_left) until a threshold of, say, 15 is reached. The model is widely used and explains the speed-accuracy trade-off: given the choice information, the more accurate you are, the longer you take to make your decision, and the faster you respond, the less accurate you will be. The drift diffusion model can be used to describe and explain many different stimuli and tasks.
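
A minimal sketch of a single simulated trial, assuming illustrative values for the three parameters; evidence (the net difference) accumulates with drift plus noise until it hits the threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

drift = 0.8       # drift rate (A): average push toward "right" per time step
threshold = 15    # threshold (z): net difference needed before responding
noise = 3.0       # noise (c): random variability added at each time step

evidence, time_steps = 0.0, 0
while abs(evidence) < threshold:
    # the net difference (N_right - N_left) drifts upward on average,
    # but is perturbed by noise at every step
    evidence += drift + noise * rng.normal()
    time_steps += 1

choice = 'right' if evidence > 0 else 'left'
print(choice, 'after', time_steps, 'time steps')
```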

Feature detection theory pandemonium model

Each demon has its own task in the model. For the letter R: the image demon "shouts" what the image looks like (e.g., a middle diagonal, a straight line at the top, a curved line at the bottom). Each feature demon shouts when it sees its own feature. Each cognitive demon shouts when it hears features that correspond to its own letter; the one with the most matching features shouts the loudest. The decision demon chooses whichever cognitive demon shouts the loudest, and recognition has occurred.

Perceptron Mark 1 1956

The first ANN. It strengthened connections between nodes in order to learn, and it used supervised learning (as opposed to unsupervised learning). Example: one input layer and one output layer, fully connected, so every node in the input layer has a connection to every node in the output layer. The output layer compares its output to the desired output, called the "teacher." The network learns by adjusting the weights between the nodes: with feedback, it adjusts the weights until the output matches the desired output.
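
A minimal sketch of the perceptron learning rule on a made-up, linearly separable toy task; the data, learning rate, and number of passes are illustrative, not from the original Mark I:

```python
import numpy as np

# Toy supervised task: classify 2-D points into two linearly separable categories
X = np.array([[0, 1], [1, 2], [2, 0], [3, 1]], dtype=float)
teacher = np.array([1, 1, 0, 0])        # desired outputs from the "teacher"

weights = np.zeros(2)
bias = 0.0
lr = 0.1

for epoch in range(20):
    for x, t in zip(X, teacher):
        output = 1 if x @ weights + bias > 0 else 0
        error = t - output                   # feedback: desired minus actual
        weights += lr * error * x            # strengthen/weaken connections
        bias += lr * error

print(weights, bias)   # weights now separate the two categories
```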

Feature integration theory

Focused attention stage: a later stage in pattern recognition that requires concentrated attention under voluntary control. Features are combined and integrated. This is a slower, more controlled process.

Connectionist model of English past tense results

Give regular and irregular verbs as inputs and train the model on correct and incorrect past-tense conversions; the model shows results similar to the human data.

Video summary

Google Translate example: neural networks, deep learning, and machine learning. Neural networks take in data, are trained to recognize patterns, and produce outputs. Neurons are the core processing units: an input layer, a final output layer, and hidden layers in between performing computations. The output unit with the highest activation fires and is associated with the highest probability. The network is trained using the magnitude of the error: that information is transferred back through backpropagation, the weights are adjusted, and then the correct answer can be computed.

The cognitive and neuroscience approach

How do we recognize objects and patterns?

evaluating feature detection theory

How do you define what counts as a feature? The theory accounts well for bottom-up processes, but what about top-down processes?

Feature detection theory

The image is broken down into its component features. A feature is a part or subset of an object. Features are combined in unique ways to form different objects. The letter A, for example, can be specified by a short horizontal line and two longer diagonal lines as its features.

Forward propagation

Information flows forward through the model. The input units are activated by a picture of a cat: nodes for small ears, fur, and a short snout. These nodes send activation through their weighted connections to the hidden units, and the hidden units in turn send weighted activation through their connections to the output units.
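
A minimal sketch of forward propagation for the cat example, with made-up weights; activation simply flows from input units through hidden units to output units:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Input units activated by a picture of a cat: [small ears, fur, short snout]
x = np.array([1.0, 1.0, 1.0])

# Made-up weights for illustration
W_input_to_hidden = np.array([[ 0.5, -0.2],
                              [ 0.3,  0.8],
                              [-0.1,  0.4]])   # 3 input units -> 2 hidden units
W_hidden_to_output = np.array([[ 0.9, -0.6],
                               [ 0.2,  0.7]])  # 2 hidden units -> 2 outputs (cat, dog)

# Activation flows forward: input -> hidden -> output
hidden = sigmoid(x @ W_input_to_hidden)
output = sigmoid(hidden @ W_hidden_to_output)
print(output)   # the output unit with the higher activation is the network's answer
```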

Template matching theory

An internal mental representation of a stimulus drawn from long-term memory is compared to the current perceptual stimulus. This requires a difference operator: some way of comparing how similar or different the long-term representation and the current stimulus are, i.e., how well the stimulus matches the template in your mind.

Connectionist model EX:

Math example: an input of 1,0,0,0,0 mapped to an output of 1,0,0,0,0 can signify, say, a mammal such as a cat.

Evaluating feature detection theory

The model can explain why people make mistakes, like recognizing an R instead of a B. It also has a neural basis: there are feature detectors in V1.

Multilayer perceptron

More complex networks were developed to address this: multiple layers, with the addition of hidden units, using forward propagation and backpropagation. The way neurons connect to each other is copied in order to get computers to work the same way; nodes and connections can have different strengths, which are represented numerically so the network can perform its computations.

Evaluating template matching theory

Not a realistic way that object recognition is achieved. It might work when the stimulus does not vary, like reading the numbers off the bottom of a check. However, there are many possible variations for a given stimulus, and this theory says there is a separate template for each variation of every stimulus, which is unlikely.

Applications of computer vision

Object detection: CNNs can detect instances of semantic objects in a visual scene, for example, is there a bowl present in the image? Facial recognition: software such as Google FaceNet and DeepFace uses CNNs to achieve facial recognition. Action and activity recognition: CNNs can classify whether people are playing volleyball. Human pose estimation: CNNs can estimate human poses.

Bayes Rule

p(z | data) = p(data | z) p(z) / p(data): the posterior probability of our parameter value z given the data equals the likelihood of the data given z, times the prior on z, divided by the probability of the data. You can visualize what the MAP (maximum a posteriori) estimate is doing just by looking at the posterior.
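
A minimal sketch of Bayes' rule applied to a few candidate z values, assuming a uniform prior and made-up likelihood values:

```python
import numpy as np

# Candidate threshold values and a uniform prior p(z)
z_values = np.array([5, 10, 15, 20])
prior = np.full(len(z_values), 0.25)

# Hypothetical likelihoods p(data | z): how well each threshold explains the data
likelihood = np.array([0.02, 0.10, 0.30, 0.05])

# Bayes' rule: p(z | data) = p(data | z) p(z) / p(data)
evidence = np.sum(likelihood * prior)          # p(data)
posterior = likelihood * prior / evidence

print(posterior)                                        # sums to 1
print('MAP estimate of z:', z_values[np.argmax(posterior)])   # 15 here
```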

Feature integration theory and visual search

Parallel search: the black T seems to "pop out" in the display. The search is done in parallel, meaning that no matter how many distractors are present in the display, the T is perceived automatically. This is shown in the reaction time data on the graph, where the reaction time remains the same regardless of the number of distractors in the display. This provides evidence for the pre-attentive phase of feature integration theory, where an object's features are identified automatically.

The drift diffusion model

Parameters: drift rate, threshold, and noise. Each combination of parameters will give different choice and reaction-time data. Raise your threshold and you are more likely to answer correctly, but you'll take longer to make your decision.

Simple task: are the dots moving left or right?

participant says right

Spreading activation

A person who has just heard "automobile" is faster to recognize "truck" than "skateboard." Spreading activation is thought to underlie retrieval of information from long-term memory; semantic priming demonstrates that processing of a stimulus is facilitated by the network's exposure to a related stimulus.

Review Information processing models vs connectionist models

Information processing models: a sequential order of operations with discrete units; information is processed in one stage and then passed forward, one stage at a time. Connectionist models: knowledge is represented as a pattern of activations or synaptic strengths distributed through a network (for example, "cat" may be represented by a pattern). Connectionist models use parallel processing: processing occurs through simultaneous activations in the network, and activations in all layers happen simultaneously. Each individual node takes in inputs and gives out outputs; the lines are the connections between the nodes.

Turn hypotheses into algorithms: did she randomly guess?

Step 1: generate a random number between 0 and 1 and assign it to R. Step 2: choose left if R < .5, otherwise choose right.
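
A minimal sketch of the guessing hypothesis as a program:

```python
import random

r = random.random()                          # Step 1: random number between 0 and 1
response = 'left' if r < 0.5 else 'right'    # Step 2: respond based on R
print(response)
```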

How do we recognize objects?

Template matching, feature detection, recognition by components, computational (Marr's approach), and feature integration theories.

Evaluating feature detection theory..

There is an interaction between bottom-up inputs and top-down processes to aid in recognition. Bottom-up inputs are the raw stimulus information; top-down processes are things like prior experience and expectations. Bottom-up inputs enter the visual system and are met with predictions from top-down processes (e.g., you go to a grocery store expecting to see food). Aspects of the bottom-up stimulus that do not match the prediction are sent back up to higher processing areas (e.g., your grocery store is having a special event, and there are balloons at the entrance).

Feature detection theory pandemonium model 2

There is neuroscientific evidence for this: simple features like lines and angles are coded in area V1, and complex features made up of simple features are coded in IT (inferotemporal cortex) of the temporal lobe.

Recurrent networks

This network takes in a sequence of inputs. It doesn't need a teacher; instead it uses the next input in the sequence to generate the error and adjust the weights of the model. It is useful for learning the rules of letter sequences in words, the grammar of word sequences in sentences, and the relationships between word meanings and grammar. You can change the number of nodes in each layer, the number of hidden layers, and where feedback occurs; there is a lot you can do to change the structure and functionality. The network predicts the next item in the sequence, and that prediction acts as the feedback used to adjust the weights.

Perceptron Mark 1 1956 . .

This simple network could learn to recognize simple patterns, such as vertical and horizontal lines, and perform simple classification tasks. The first neural network could already function reasonably well.

Things to consider

Units and connections do not have a one-to-one correspondence with actual neurons and synapses. There is no single computational model that can encompass the entire mind and brain.

Perceptron Mark 1 1956.

Supervised learning: a single input layer is fully connected to an output layer. The input layer passes activation to the output layer and the model gives you an answer; the output layer then compares its output to the desired output, called the "teacher." What makes it supervised learning is the error signal generated from this comparison: when the model is wrong, it can use the feedback to change the weights between nodes so that next time it will generate the correct output.

What to do with the model fits?

we can use these to make inferences: we can fit the threshold and drift rate parameters for each individual subject. Do different people have different thresholds for making responses? Their model fits will show this in different values for z. We can also use these to compare models.

Different connections have different weights. Weights are adjusted through backpropagation so that the model produces correct output.

Weights connect the input units to the different output units. For example, an input representing "can fly" should have a weight of 0 to the "human" output, because humans cannot fly. The pattern of weights determines whether the output is produced correctly or incorrectly.

Connectionist model of acquisition of inflectional morphology

What is inflectional morphology? What are the rules for transforming words from present tense to past tense? They are not always consistent in the English language, so how do children learning English work out how to transform present tense to past tense? English past tense (regular): walk → walked. English past tense (irregular): swim → swam, build → built, think → thought. English past tense (arbitrary): go → went (a completely different word), hit → hit (the same word).

Evaluating feature detection theory...

What is the middlemost figure? Context has an influence: often, context and higher-level knowledge aid in recognition. Top-down expectations of seeing a dog aid in recognition of the Dalmatian.

What networks do

They model the computational properties of groups of neurons, not the detailed neurophysiology of small groups of neurons (that is computational neuroscience). They tend to focus more on overall system function or behavior. They give us a formalism for understanding how cognitive processes are implemented in the brain, and how disorders of brain function lead to disorders of cognition.

Marr's computational approach to vision

1982: specifies the steps a computer would go through to recognize an object. Step 1, raw primal sketch: contains an image represented in terms of its distribution of intensity values, or areas of light and dark. Step 2, 2½-D sketch: contains an image representation that includes information about surfaces and layout. Step 3, 3-D sketch: contains a 3-D image representation in which object parts are linked together by axes of symmetry and elongation.

Review Which best describes why template matching theory not a realistic way that object recognition is achieved?

A There are too many possible stimuli in the environment for each of them to have a unique template stored in the brain.

Semantic network

A node's activity can spread outward along links to activate other nodes, and these nodes can activate their adjacent nodes, a process called spreading activation. This activation energy decreases with increasing distance. Concepts such as "automobile" and "truck" are semantically related and will have direct connections between them; if the automobile node were activated, it would be faster and easier for the activation to spread to "truck" than to "skateboard." Concepts such as "automobile" and "skateboard" are less related and would only be indirectly connected through intervening concepts.
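
A minimal sketch of spreading activation on a made-up toy semantic network; the links, starting energy, and decay factor are illustrative:

```python
# Toy semantic network: nodes and the links between them
links = {
    'automobile': ['truck', 'street', 'wheels'],
    'truck': ['automobile', 'wheels'],
    'street': ['automobile'],
    'wheels': ['automobile', 'truck', 'skateboard'],
    'skateboard': ['wheels'],
}

def spread_activation(start, energy=1.0, decay=0.5):
    """Activation spreads outward from a node, decreasing with distance."""
    activation = {start: energy}
    frontier = [start]
    while frontier:
        node = frontier.pop(0)
        for neighbor in links[node]:
            new_level = activation[node] * decay
            if new_level > activation.get(neighbor, 0.0):
                activation[neighbor] = new_level
                frontier.append(neighbor)
    return activation

act = spread_activation('automobile')
print(act['truck'])        # directly connected: 0.5
print(act['skateboard'])   # only indirectly connected (via wheels): 0.25
```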

Which layer in a modern neural network contains a representation of the response that is then matched against the teacher? review

A output layer

Perceptual categorization deficit

A type of apperceptive agnosia. Difficulty recognizing objects that are viewed from unusual angles.

Visual Agnosia

An inability to recognize a visual object. Associated with damage to brain regions in the ventral visual pathway, areas responsible for carrying out visual object recognition.

Prosopagnosia face perception

Another type of agnosia, where people have difficulty recognizing and discriminating faces. They can perceive faces, but cannot identify a face or discriminate one face from another. In humans, cells that respond to faces are found in the fusiform face area (FFA), located in the temporal lobe.

What is attention?

Attention is concentrated mental activity where processing resources are allocated to different sources of information.

What is the Perceptron Mark 1 used for? What can't the Perceptron Mark 1 do?

Binary classification tasks, like cat vs. dog: tasks where you put something into one of two categories. The two categories must be linearly separable from one another. It can NOT learn nonlinear functions, where no clear linear separation exists. On the left is a linear function: the inputs can be categorized into one of two categories, separated graphically by a diagonal line. On the right is a nonlinear function: the inputs can be categorized, but they cannot be separated graphically by a diagonal line. Takeaway: some things are not so easy to categorize as one or the other.

bottom up vs top down processing

Bottom-up: sensory receptors → brain. Top-down: the brain uses experience/expectations to interpret sensory info.

Cognitive approaches to attention

Broadbent's filter model, Treisman's attenuation model, the Deutsch-Norman memory selection model, Kahneman's capacity model.

Broadbent's Filter model

Broadbent's model is an early selection model. This means it is a model of attention where information is selected early, based on physical stimulus characteristics. Broadbent's model filters out information before it can be recognized: information is completely blocked at the filter based on some physical characteristic. For example, the channel coming into the left ear is blocked, and the channel coming into the right ear is allowed to pass through.

Review For Treisman, serial search demonstrates a linear function because

C The more distractors, the more time it takes to search

In computational modeling, how can we use parameter estimates obtained from model fitting?

C We can use the parameter estimates (e.g., via Bayes' rule) to make inferences about the internal cognitive processes of the participants whose data the fits are based on.

Which of the following best characterizes an artificial neural network? review

C. A computer program that mimics how real neurons might perform some computation

CNN

Convolutional neural networks are inspired by the structure of the human visual system. They have three kinds of layers: convolutional layers, pooling layers, and fully connected layers. Using convolution kernels, the network generates feature maps of the image, detecting features such as edges, vertical and horizontal lines, bends, etc. (You want the kernel's center values closer to 1 and its edges closer to 0, and you take the small square and slide it along the image.)
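
A minimal sketch of the "slide a kernel along the image" step, using a made-up 5x5 image and an illustrative vertical-edge kernel (not the exact kernels a trained CNN would learn):

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small kernel across the image and build a feature map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# An illustrative vertical-edge detector
kernel = np.array([[ 1, 0, -1],
                   [ 1, 0, -1],
                   [ 1, 0, -1]], dtype=float)

# Toy 5x5 image: bright left half, dark right half (a vertical edge)
image = np.array([[1, 1, 1, 0, 0],
                  [1, 1, 1, 0, 0],
                  [1, 1, 1, 0, 0],
                  [1, 1, 1, 0, 0],
                  [1, 1, 1, 0, 0]], dtype=float)

print(convolve2d(image, kernel))   # largest values where the edge is detected
```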

Review Which of the following is not a property of geons in recognition by components theory

D Feature detection

What can we learn from artificial neural networks about the brain?

D We can learn about the neurobiological underpinnings of neurological disorders.

Two main visual pathways

Dorsal pathway ("where" information, the where/how pathway): a path that travels upward to the parietal lobe, where information about motion and location is extracted; it contains information about the motion and location of an object. Ventral pathway ("what" information, the what pathway): a path that carries data about color and form and travels downward to the temporal lobe; it contains information about the form and color of an object.

This is likely oversimplified

The drift diffusion model says to track the net difference in the number of dots moving right and left until you reach a certain threshold. For example, if you are tracking N_right − N_left, a value of 15 means there are 15 more dots moving right than moving left; once the threshold is reached, a response is made.

A convolutional neural network uses feedback from the next

F

Template matching theory 2

For each object we encounter, there is a template it is compared against. With a high degree of overlap, recognition occurs; with a low degree of overlap, recognition does not occur.

Recognition by components theory

A geon is a basic volumetric shape, such as a cube, that can be used to recognize an object; combinations of these shapes make up all possible objects in the world. Three properties of geons: 1) view invariance, geons can be identified when viewed from many different perspectives; 2) discriminability, you can easily tell different geons apart, even when viewed from multiple viewpoints; 3) resistance to visual noise, geons can be perceived even when many of the contours that make them up are obscured. This theory accounts for object constancy (invariance) and occlusion.

Recognition by components theory

How do theories explain the phenomenon of object constancy? Color constancy: cells in V4 continue to respond to the same surface color even if the light source is changed

Neural coding

How do you represent your grandmother? Specificity coding vs. distributed coding. Specificity coding (not likely to be true): a single cell fires in response to the presence of a particular face but not in response to any other face; for example, a "grandmother cell" is a cell that fires only when you see your grandmother. This is not very likely, because there are far too many stimuli in our environment to code for. Distributed coding (more likely to be true): a specific face, such as your grandmother's face, is coded for by a specific pattern of activation among a group of cells. This is more plausible, as it allows the same neurons to code for many different stimuli.

Discussion 4

Human vision is not just about capturing images; it's about making sense of the world around us...

What are some challenges to computer vision?

Humans can seemingly effortlessly see the world: we can easily count the number of faces in a picture and even infer the emotions on those faces. We can not only recognize objects but also seamlessly separate them from their background. It has been very challenging to teach computers to do these seemingly simple tasks. How do we teach computers to do this? We can use neural networks, specifically a convolutional neural network (CNN).

Visual agnosia: Apperceptive Agnosia

Apperceptive agnosia: individuals have difficulty assembling the pieces or features of an object together into a meaningful whole. (By contrast, in associative agnosia individuals can perceive a whole object but have difficulty naming or assigning a label to it.)

What is the network approach?

Influenced by the principles of operation and organization of real neurons and brains (how neurons communicate with each other). There are inputs to each node; this mirrors the dendrites that receive inputs from other neurons.

Evaluating Broadbent's filter model

It does not explain the cocktail party effect: words of personal significance should be filtered out but are not, even though they do not fit the criteria of physical stimulus characteristics to be passed through.

Recognition by components theory .....

It works better with larger differences between categories, like the difference between a dog and a whale. It does not work as well with smaller differences within a category, like the difference between a hummingbird and a dove.

Hebbian Learning 1

Learning happens through the strengthening of connections between neurons; we can think of the connections between neurons as the "weights" in an ANN. When two neurons fire together, the strength of their connection is increased. The neurons that fire when an individual sees a banana repeatedly fire together every time they see the banana, and a cell assembly is formed. This foundation of learning and memory is a form of unsupervised learning: no one designs these neurons to do this, it happens as a byproduct.
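
A minimal sketch of Hebb's rule, with made-up activity values and learning rate; the connection weight grows only on exposures where both neurons fire together, with no teacher signal:

```python
import numpy as np

# Two neurons with a connection weight between them
weight = 0.1
lr = 0.05

# Each row: activity of (neuron A, neuron B) on one exposure to a banana
activity = np.array([[1.0, 1.0],
                     [1.0, 1.0],
                     [0.0, 0.0],
                     [1.0, 1.0]])

for a, b in activity:
    # Hebb's rule: when the two neurons fire together, strengthen the connection
    weight += lr * a * b

print(round(weight, 2))   # 0.25 -- the connection grew each time they co-fired
```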

