PSYC 153 EXAM 2

We use metacognition quite a lot!

"I'm trying to recall a funny story I heard" "Am I paying good attention to the lecture?" (monitoring your own behavior)

Variability in input - a problem in categorization and recognition

(Could also be a challenge for AI developers! e.g., a potato that looks like a teddy bear; an apple that looks like a face)
- Incomplete rendering of the whole object
- Generalizability of one exemplar (view) to another
- Within-category variability -> a large number of exemplars is needed for successful recognition in challenging scenarios

Dynamical System and Embodied Cognition

(Do robots use pre-stored mental reps to avoid running into something? NO - they use real-time sensory feedback instead) (Embodied cognitive scientists see every human being as a dynamical system ~ little emphasis on internal reps, which is very different from traditional PCS)
- Dynamics of *everything* involved in the task
- Little emphasis on internal representations
- Also applied to robotics: *behavior-based* architecture (no internal reps needed) -> *real-time (sensory) feedback from the environment* instead of pre-stored representations of events ~ reps of every object in its way (i.e., does Asimo use pre-stored representations to avoid running into something? No - Asimo knows how to walk around people while moving)

Turing test (Alan Turing 1950)

(Is this conversation between two human beings?) (Imitation game: can the conversation be judged as human?)
Simply put: can you tell whether a response was generated by a human or a machine?
Goal of the test: can a machine think? i.e., does it have a mind?
Criterion: judged as a human's response *~30%* of the time

Machines can be trained to recognize and categorize objects

(Machine learning: stimulus -> input layer/ receptors -> hidden layer/ complex classification -> output layer) ...Now let's discuss human object recognition, categorization and the formation of concepts

Meta...cognition?

(Something important in our cognition: How do we monitor our own performance?) *When* and *where* did you first hear about this concept? (you use your metacognition when recalling the answer to this question)

Turing machine: a primitive architecture (Alan Turing 1936)

(cf. the film *The Imitation Game*) Simply put: a primitive computing device for input processing and output generation
Basic components:
- An infinite memory tape with squares
- Only 3 symbols: 1, 0, blank
- A read-write head that can move left and right
- Transition *rules* (the "hidden layer")
- Execution (early scientists used the TM as a metaphor for how the mind works)
How does a Turing machine actually work?
1. Select an example (a preprogrammed task)
2. The machine executes it and determines whether the input matches the rule given in the task (a toy simulation follows below)
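A minimal sketch of how such a device can be simulated, assuming a made-up transition table; the state names, the all-1s task, and the `run_turing_machine` helper are illustrative, not from the lecture:

```python
# Minimal Turing machine sketch (illustrative): checks whether a tape of
# 1s and 0s contains only 1s. States and rules here are made up for demo.
def run_turing_machine(tape, rules, state="start"):
    tape = list(tape)
    head = 0
    while state not in ("accept", "reject"):
        symbol = tape[head] if 0 <= head < len(tape) else " "  # blank off-tape
        write, move, state = rules[(state, symbol)]
        if 0 <= head < len(tape):
            tape[head] = write
        head += 1 if move == "R" else -1
    return state

# Transition rules: (state, read symbol) -> (write symbol, move, next state)
rules = {
    ("start", "1"): ("1", "R", "start"),   # keep scanning right over 1s
    ("start", "0"): ("0", "R", "reject"),  # a 0 means the input fails the rule
    ("start", " "): (" ", "R", "accept"),  # reached the blank end: all 1s
}

print(run_turing_machine("111", rules))  # accept
print(run_turing_machine("101", rules))  # reject
```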

Is Mental Representation Still Needed In Other Scenarios?

(Before, we mostly looked at action reps) Analogy from other species (Wilson & Golonka 2014):
- Female crickets look for males with the loudest songs -> do they have a pre-stored representation of the songs? (impossible -> must be another mechanism) -> a biologically adaptive system

Strong vs. Weak AI

(Can machines think?)
Strong version: machines can have a "mind"; cognitive processes are simply symbolic operations
Weak version: computer models as a tool to study the human mind
One requirement for an AI system to be considered as having a mind:
- The ability to "duplicate the *causal powers* of brains" (Searle 1990) (cognitive processes; neural representation ~ change of state)

OK...so how are these processes related to metacognition?

(each IMPLIES the other going down the line except for SELECTING which can function by itself) Selecting: what info should I look for? *-->* Maintaining: I need to monitor what I'm listening to *-->* Updating: New info is coming in. I need to try to remember this new info now *-->* Rerouting: I should now switch my attention to the slide

Interactivity In A Dynamical System: How Do Robots Learn To Play Ping Pong?

(ex. of action perception coupling & being a dynamical system) Both A and B are constantly affecting AND being constantly affected by each other's behavior Both A and B are dynamical systems: -> "A" changes as a function of B's behavior -> "B" changes as a function of A's behavior A and B are *"coupled"* systems (if you miss a ball -> DEcoupled action)

How do we test "rerouting"?

(i.e., looking back and forth between slides and notes) Correlated with selecting -> select a new target of attention: attention shift and control
Rerouting is also involved in:
- Dual-tasking
- Divided attention -> being aware of the change of attention target at the same time
- (e.g., Is it a word? TOGAK (in red); Is it blue? POGAF (in blue))
- (an alternate version of the selecting task)
What does it measure?
- The time it takes to switch (or "reroute") from feature A to feature B (e.g., from "word" to "color")
- Accuracy on the switched trial (i.e., feature B)
- *Inhibitory control* (executive control is needed to inhibit the prepotent response, e.g., answering the color when the question is whether it is a word)

View-invariant vs. View-dependent features

(Now can you recognize and categorize this object? (AVOCADO); what went wrong? (FROG); how do we recognize an object from a different view?)
What features are view-invariant? -> skin, color, texture
What features are view-dependent? -> shape, legs, eyes, mouth

How do we know if we are doing things right?

(Real-time sensory feedback) Feedback from output
Self-monitoring:
- Memory retrieval
- Attention
- Working memory
- All 3 monitored by metacognition

Why connectionism

(A simulation of your neural network) What's going on inside each "black box"? The notion of "system" is too vague:
- What is a system composed of?
- How exactly does each "black box" work?

Machine Learning

(Training machines to process info) A statistical approach
Learning via an artificial neural network (i.e., an *artificial brain!*)
Training data (tells the model what's right vs. wrong) --> the machine learns the patterns --> classification (a toy sketch follows below)
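As a rough illustration of this pipeline (training data -> learn patterns -> classify), here is a toy perceptron sketch; the features, labels, and the `train_perceptron` helper are all invented for illustration:

```python
# Toy perceptron sketch: labeled training data -> learned weights -> classification.
# Features and labels are invented for illustration (e.g., [redness, roundness]).
def train_perceptron(data, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):           # y is 1 (apple) or 0 (banana)
            pred = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
            err = y - pred                        # supervised error signal
            w = [w[0] + lr*err*x[0], w[1] + lr*err*x[1]]
            b += lr * err
    return w, b

data   = [[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.1, 0.2]]
labels = [1, 1, 0, 0]                             # 1 = apple, 0 = banana
w, b = train_perceptron(data, labels)
print(1 if w[0]*0.85 + w[1]*0.9 + b > 0 else 0)   # classify a new fruit -> 1
```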

Hidden layer: what's being hidden?

(When talking with someone, you can't see what's going on in their brain (the hidden layer), but you can see their output ~ the hidden layer sits between input and output) How does Siri understand and process your request?

Functions of a concept??

*1. Simple categorization* (e.g., distinguish an apple from an orange; *between categories*) - very general, easily distinguished
*2. Complex categorization* (e.g., judge whether an apple is a Honeycrisp or a Fuji; *sub-categories*)
*3. Linguistic meaning:* for communication and defining semantic relations with other words/concepts - attaching meaning to a concept (what differentiates humans from AI)
*4. Components of cognitive states:* building blocks of beliefs, thoughts, etc. -> *belief system!*

Behavior-Based Architecture

*Cricket robot* (Clark 1999)
- (Female crickets find mates by homing in on the loudest song)
- (The same behavior was replicated in robots equipped with only signal receivers and an action algorithm responding to loudness, alongside robots replicating the males' "singing")
- (An example of perception-action coupling ~ the cricket does not use internal reps to find a mate)
What's common to Asimo's walking and the cricket's mating?
- Representations are not needed
- Perceive -> match against pre-stored representations -> act == FAILED
- INSTEAD: perceive <-- *real-time feedback* --> act (coupling action with perception ~ no intermediate level needed)
- Both Asimo and the cricket are dynamical systems!

How do we test "maintaining"?

*Maintaining:* real-time monitoring of temporarily stored info
-> Working memory tasks (e.g., digit span)
-> Immediate recall
--> Is "selecting" required in these tasks? (YES, because it's the first thing you need to do to determine what you're going to attend to)
Why is it important for metacognition?
-> Need to know what information is accessible (or inaccessible) so that we can make adjustments to resource allocation

Off-Loading Cognitive Work Onto The Environment

*Off-loading* as a cognitive strategy:
- One example: *"externalization of memory"* (e.g., notes -> save cognitive resources, sparing our working memory)
We use off-loading all the time!
- Counting on *fingers* (external to the brain) - DIRECT off-loading
- Doing calculations with pen and paper - DIRECT off-loading
- Selective attention/encoding - INDIRECT off-loading (i.e., paying attention only to highlighted words -> INDIRECTLY off-loading all other words onto the environment)

Embodied View: *Task Analysis* In The Outfielder Problem

(A chain of actions rather than discrete steps)
1. What's the problem to be solved?
2. What are the available resources? (e.g., wind)
3. How to make use of those resources?
4. Assemble the resources to solve the problem
*-> task dynamics* (something implemented as part of motor skills/procedural memory ~ unconscious/instinctive)

So... What Are The Common Things In These 6 Views? First, What Are Some Key Words?

1. cognition is *situated* 2. cognition is *time* pressured 3. cognition is for *action* 4. off-line cognition is *body* based 5. the *environment* is part of the cognitive system 6. we off-load cognitive work onto the *environment*

One essential component of metacognition - Executive control

4 major processes (which can work together) in executive control:
*1. Selecting:* direct attention to the target
*2. Maintaining* (of input in working memory) (processing the relevant info one item at a time)
*3. Updating* (of temporary info in working memory) (new info to process)
*4. Rerouting:* attention shift - being aware of a change in the target of attention (re-selecting; from A to B)
Wait...I thought we were talking about metacognition. Why did we just talk about working memory??? --> Central executive = resource allocation. "Executive control" acts like a manager, in charge of allocating attentional resources to the different working memory components.
How do we test each of these?
*1. Selecting:* Stroop task
*2. Maintaining:* typical working memory tests (e.g., digit span), immediate recall
*3. Updating:* n-back task
*4. Rerouting:* attention shift task

Can computer really pass the Turing test?

A fake 13-year-old Ukrainian chatbot, "Eugene Goostman," reportedly passed the test. Here is an excerpt of the conversation conducted by a journalist from Time magazine in 2014:
"How are you adjusting to all your new-found fame?
--I would rather not talk about it if you don't mind. By the way, what's your occupation? I mean - could you tell me about your work?
I'm a technology blogger.
--It's quite surprising that you are a technology blogger. Where do you came from, by the way? Could you tell me about the place where you live?
I live in Boston but I grew up in Minnesota. I also lived in Seattle for a while.
--Could you invite me to Seattle? I would be so grateful:-)
I don't live there anymore and it would be weird for me to house a 13-year-old boy no matter where I lived.
--Possibly, it will be a big surprise for you, but "no" isn't the only answer. The word "yes" also exists! :-) Wonna ask me something more?
That last answer of yours was a little shaky, Bro-Bro.
--Damn! I've just recalled that I didn't feed my guinea pig this morning. Poor animal!
Ah, okay. We're changing the subject. What's your guinea pig's name?
--Name of my guinea pig's name is Bill. But I suspect he doesn't go by it. Perhaps, he is deaf and I should call him "Beethoven"."

What Is A Dynamical System?

A highly adaptive and *interactive* system (with environmental stimuli):
-> *Parameters change in response to feedback (cognition is time-pressured!)* (need to make quick adjustments)
Parameters of muscular movements:
-> angles, force, *timing*, degree of stretch, etc. (Mind as Motion)
(Can be expressed with mathematical equations) e.g., a simple linear model (y = ax + b):
- Let Y be THE TIME NEEDED TO CATCH A FLY BALL
- Let B1, B2, B3, B4 be the weights of the parameters X1, X2, X3, X4 (*the parameters can be internal or external*)
- e.g., X1 = humidity, X2 = wind, X3 = launch angle, X4 = distance, c = ideal running speed (all examples of task dynamics)
- Y = c + B1*X1 + B2*X2 + B3*X3 + B4*X4 (a worked toy example follows below)
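A worked toy version of that linear model; the weight and parameter values below are invented, not from the lecture:

```python
# Toy sketch of the linear task-dynamics model Y = c + B1*X1 + ... + B4*X4.
# Weights and inputs are invented for illustration only.
c = 3.0                                   # baseline (ideal running speed term)
weights = [0.2, 0.5, 0.8, 0.01]           # B1..B4: how much each parameter matters
params  = [0.6, 2.0, 35.0, 90.0]          # X1 humidity, X2 wind, X3 launch angle, X4 distance

Y = c + sum(B * X for B, X in zip(weights, params))
print(f"Predicted time to catch the fly ball: {Y:.2f} s")
# The dynamical-systems point: these parameters update continuously as
# feedback arrives, so Y is recomputed in real time rather than planned once.
```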

Connectionist Models in Cognitive Science

A way to simulate a *reducible* phenomenon (also rooted in the biological theory of intelligence):
- Features as discrete units/nodes
Visualization of *mental representations (the traditional view of cognition)*:
- Perception, recognition, production
- Machine learning, AI

Various types of concepts

Abstract vs. concrete concepts - one way to distinguish the two: whether a mental image can be formed
- It is easier to form a mental image of a concrete concept
- But what would a mental image of "knowledge," "ideas," or "abstract" look like?

Box-and-arrow models

Advantages of box-and-arrow models:
- Basic structure of information flow: stage by stage
- Visualization of a theoretical framework
- Relatively easier to model than connectionism
- *BUT* neural underpinnings unknown (WHY connectionism)

Another example of embodied metacognition:

Alban & Kelley (2013):
- "Physical weight" as a cue in metacognition (!)
- Task: word learning and recognition
-> Study a list of words and later recall whether a word was on the study list
-> Rate *Judgment of Learning* (JOL) while studying the words, on a scale of 0-100
-> ("I'm confident that I can recall this word later, so I'll rate it 95 - very likely to remember")

Strong AI supporters' arguments

- Although the man in the room doesn't know Chinese, the system (including the room itself) does. (!)
- We can simulate the system at the neuronal level (e.g., similar to a neural network/connectionist model) so that it can understand Chinese like a Chinese speaker. (but a connectionist model has multiple layers)
- The person in the room actually understands Chinese subconsciously (NO)
- Semantics (content) is not needed - just the rules and structures (syntax)

ChatGPT (Generative Pretrained Transformer)

An AI model trained with "*Reinforcement Learning* from Human Feedback (RLHF)" (OpenAI, 2023) (the machine is trained somewhat like a baby)
- Could serve as a digital assistant
- Not really for casual chatting like other chatbots (unlike Replika)
- Could provide some tools to start with (but it may also contain false info, like Wikipedia!)
- Can it pass the Turing Test? Debatable...it depends on what kind of questions you ask.
[Note: Always check with your course instructor about the proper use of AI tools. For this class, any unauthorized use of ChatGPT for assignments is not permitted.]

Two levels of analysis in metacognition: meta and object level

An example of the metacognitive process involved in learning (Bergstra 2015) (a reflection on learning):
- In the middle: metacognitive knowledge
- Circling around metacognitive knowledge:
-> Meta level (metacognitive processes; reflection on learning; the level above action/perception) --- Control (decisions on learning strategies) directed down to the object level
-> Object level (the learning task and the learner's related cognitive activities; your actual performance/action) --- Monitoring (information on the learning process) directed up to the meta level

Within-layer Inhibition/ Activation

An example of sentence production: the thought of lunch -> "cook some pasta"
- Within the "concept" layer, LUNCH activates related concepts while inhibiting competitors such as DINNER
- Within the next layer ("word"), "pasta" activates related words such as "seasoning" while inhibiting unrelated ones such as "books"

Potential problems with using connectionist models and animal models without testing human subjects

Animal welfare....um...yes
Most importantly, generalizability of the models to realistic scenarios:
- e.g., a model predicts that a signal over 90 dB is classified as noise - does that generalize to real humans?

A simple algorithm: Test-Operate-Test-Exit (TOTE; Miller et al. 1960)

Applicable to perception, recognition, problem solving, decision making (i.e., the stimulus can be a "problem to be solved")
Stimulus --> Test -- (Incongruity) --> Operate --> Test -- (Congruity) --> Exit
Mental activity = execution of an algorithm (a minimal sketch in code follows)
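A minimal sketch of the TOTE loop in code, using the classic hammering-a-nail illustration; the `tote`, `flush`, and `strike` names are invented:

```python
# Minimal TOTE (Test-Operate-Test-Exit) sketch: keep operating until the
# test finds no incongruity between the current state and the goal state.
def tote(state, goal, test, operate):
    while not test(state, goal):   # Test: incongruity remains?
        state = operate(state)     # Operate: act to reduce the incongruity
    return state                   # Exit: congruity reached

# Toy instance: drive a nail from 5 cm above the board down to flush (0 cm).
flush = lambda height, goal: height <= goal   # Test
strike = lambda height: height - 1            # Operate: each strike sinks it 1 cm
print(tote(5, 0, flush, strike))              # -> 0 (Exit)
```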

Summary

Behavior can be explained by perception-action coupling and task dynamics (not just traditional internal reps)
-> Focus on performance and functions: dynamical system (ONLY action reps, NOT static ones)
In simple embodiment, the notion of representation is NOT completely abandoned; the focus is on the *dynamics* of representation and how *sensorimotor experience* affects representations
In radical embodiment, representations of actions are still needed, but not representations of objects
Three key words in embodied cognition: time, adaptation, dynamics

Another feature: can be feedforward, feedback, or both (interactive)!

Bottom-up/ feedforward only (input creates outcome directly)

Do connectionist models and machine learning support strong AI?

Can we make a machine "think" by using an artificial neural network?
-> How and what does it think?
--> Not really (yet)...it lacks a true understanding of the meanings behind those codes/symbols
Can we make a machine "learn" by using an artificial neural network?
-> How and what does it learn?
--> Yes. It can be trained to classify things and retrieve info as requested.

The Notion Of Action Representation In Wilson's Views Of Embodied Cognition

Cognition is for action + off-line cognition is body based == representation of action (aligned/ accepted by the radical embodiment scientists)

Summary For Embodied Cognition

Cognition is not merely a centralized process in the brain but also a *whole-body experience* (mind + body + environment) Cognition should be understood together with the interacting environment Many of our cognitive processes can be explained in terms of sensorimotor experience (i.e. actions, body-based offline cognition, etc) -(evidence from fMRI - activation in pre-motor cortex when not actually moving)

Why and how do we categorize objects in the first place??

Cognitive economy:
- Improved processing efficiency by forming categories
- Extract distinctive features of objects for simple categorization
--> Bypass routes in rapid object categorization (feedforward only!)
- INPUT (V1, V2, ...) -> HIDDEN LAYERS (posterior & anterior inferotemporal cortex) -> OUTPUT (prefrontal cortex)

Flash back to connectionism: Representation of concepts

Collins & Loftus (1975) Spreading Activation Model
- (Long-term memory as a complex network of interconnected nodes ~ parallel & spreading activation)
- (Still based on the traditional view ~ static, NOT embodied)
- (Each node can be thought of as a concept)

Summary on Computation and AI

Computation and cognition: the implications of the Turing Machine in the study of mind: using the machine as a metaphor for the human mind
Why do we study AI?
-> Can we "create" a mind in a machine?
-> To better understand what it really means to have a "mind"
Strong vs. Weak AI debate: -> Does a machine have a mind?
Turing test: can computers think?
Implications of the Chinese room experiment: AI has syntax but lacks semantics; argues against Strong AI
Machine learning ≠ Machine thinking

Summary on connectionism

Connectionist model/Artificial Neural Network:
- Looks into the black boxes of box-and-arrow models
- Originally based on the assumption of *"mental representation"*
- *Parallel Distributed Processing (PDP)*
- Can be *feedforward or both feedforward and feedback*
- A good tool to test theoretical validity (i.e., needs experimental findings and/or theories to justify the model) ~ theory first
- *Simulation ≠ real human performance*
- *Machine learning:* classification, recognition, and prediction based on the training data (e.g., suggested search terms in Google)

Is Mental Representation Completely Useless In Radical Embodied Cognition?

It depends on what kind of representation!
-> *action-oriented representation* (Clark 1999)
More fMRI evidence for representations of actions (cf. the previously discussed perception of dancing and speech) (Shmuelof & Zohary 2008):
- Viewing one's own hand actions in each visual hemifield
- Activation found in the *anterior parietal cortex*
- Left hand -> right hemisphere; right hand -> left hemisphere (regardless of left or right visual field) (always contralateral)
- (IMPLICATION: action reps are stored and are shaped by sensorimotor experience; all other reps are not accepted)

Categorization vs. Identification: categorization is easier... why?

Does it belong to the category of "fruit"?
- Categorization: lots of possible ways to categorize
- Identification: knowledge of the labels required
--> Being able to categorize A and B DOES NOT EQUAL being able to identify A and B

Some Features of a Connectionist Model

Each node is analogous to a neuron
Activation determined by connection weight (you learned this!!) (the Joker example)
*Activation and inhibition* can occur in the same layer
Activation can be parallel:
- *Parallel Distributed Processing*
- *Spreading activation*
- (Several nodes can fire together (in parallel), and activation can be distributed across multiple nodes; a minimal sketch follows below)
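A minimal sketch of a single node's activation under these assumptions; the weights, inputs, and threshold are invented for illustration:

```python
# Minimal connectionist-node sketch: activation = weighted sum of inputs,
# passed through a threshold. Weights/inputs are invented for illustration.
def node_activation(inputs, weights, threshold=0.5):
    net = sum(i * w for i, w in zip(inputs, weights))  # weighted sum
    return 1.0 if net >= threshold else 0.0            # fire or stay silent

# Positive weights excite the node; negative weights inhibit it
# (within- or between-layer inhibition uses the same mechanism).
inputs  = [1.0, 1.0, 0.0]
weights = [0.4, 0.3, -0.6]   # the third connection is inhibitory
print(node_activation(inputs, weights))  # 0.7 >= 0.5 -> fires (1.0)
```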

The Role Of Representation In Embodied Cognition (Part 2)

Embodied view of the problem:
- NOT a mental representation of the path
- NOT representation-based computation
Perception-action coupling (cognition is for action):
- Resources from the environment
- Kinematics of the ball (motion) (eye-hand coordination)
- Pro outfielders even judge by the sound when the bat hits the ball
== (all under task dynamics)

Summary: The Outfielder Problem

Embodied view of the problem:
- (Traditional view = static)
- One word to cover it all: *DYNAMICS*... of what? THE WHOLE TASK
- The task as a *series of events* that go hand-in-hand
- Representation-based computation *not needed*

Source monitoring and encoding specificity

Encoding specificity: store an event along with its *associated context and details* in episodic memory:
--> Allows us to recall *the origin of an event* or mental state
- (Related to state-dependent encoding because we also encode the emotion tied to the event)
*Recalling an event itself* (not metacognition) may also (but NOT always - e.g., defining terms) activate its associated context and details at the same time.
*Recalling just the context/source -> metacognition*

Why connectionist model and animal model provide more flexibility than research on real human subject

Example: to investigate at what dB level a signal is classified as noise
- A super loud signal ->
- Input layer: "auditory nerves"/receptors ->
- Hidden layer: "auditory cortex"/classification ->
- Output layer

Refresher: Some similarities between a computer and human cognitive system

Goal of computation: input to output
*Functional descriptions* of a device - not its physical properties (what PCS lacks)
*Functional architecture: the key components in the system* (a PCS strength)
Mental activity: execution of an algorithm
- *Debate about AI: can we infer that machines have a mind?*

Categorization vs. Identification: which task is easier?

How would you categorize these two kinds of fruit? (an orange and a lychee)
- (Categorization requires a prior representation)

Action Parameters: What Are Those?

Imagine you're playing a video game to control a baseball batter: control the power, speed, angle, direction, etc

Neural network and machine learning

Implement an algorithm to train a machine to learn to categorize input
A real-time simulation:
- Noise to disrupt feedback
- More hidden layers = better classification
--> multiple brain regions are needed to achieve optimal classification

Syntax vs. Semantics in AI?

Is the brain's mind a computer program?
-> No. A program merely manipulates symbols, whereas a brain attaches meaning to them

Concluding remarks on metacognition

It's an essential component in our cognition to monitor our mental states and cognitive processes Highly dependent upon *executive control* *Sensorimotor feedback* also influences metacognition!

So What Do We Know About Embodied Cognition From Those 6 Principles Proposed By Wilson?

Key concepts/key words - Embodied cognition is about:
- time!!
- adaptation/environment!!
- body/action!!

Questions

Learning new vocabulary while practicing weight training? Any benefits?
Embodied cognition vs. Metacognition - any connections!?
- Metacognition itself is embodied ~ influenced by sensorimotor experience

Representation Of Action Also Affects How We Understand/ Use Language!

Learning the meaning of words can be tricky!
- (STATIC means only processing the word, not its meaning/implication; reading words is actually DYNAMIC ~ in a study of throwing balls while saying words, the motor cortex was also activated at recall; this is why, when studying, you must process and not just memorize, which is the traditional view of cognition)
When we say:
- we learn the MEANING of the words (e.g., a verb) -> mental representation of CONCEPTS (TRADITIONAL VIEW: *STATIC*) (disembodied, traditional view)
But if we say:
- we learn how to *USE* the word (e.g., a verb) -> implication of *functions and dynamics* -> mind, body, environment (embodied view) ~ context
How do we test representations of action in language?
- Orientation judgment task (e.g., "the flower is in the vase" -> is the pictured orientation correct?)
-> Faster responses when the object's orientation matches the description
-> An effect of representations of action
*Q: does this task involve offline, situated, or both?* (A: both - situated to analyze and understand the PROBLEM, offline to think about the orientation)
- (Supports body-based offline cognition and effects of representations of action ~ dynamic)

Computation and cognition

Main questions:
- Is our mind a COMPUTABLE ENTITY?
- How can we explain cognitive processes and behavior in terms of computational models?
One major assumption: representation (again!):
- A materialist view of representations (trying to make them more concrete)
- Something physical to operate on
- Rules (e.g., an algorithm)

Simple VS. Radical Embodiment (Clark 1999)

Major difference: the notion of *representation*
- Simple embodiment: *how actions and sensorimotor experience affect mental representations* (e.g., memory) (still retains some sort of notion of representation) (reps are DYNAMIC, not static)
Radical:
- *Replacement hypothesis:* mental representations are NOT needed (can ONLY accept action/gestural reps; all others are outdated)
- Focus on the integration of *mind, body, and environment: distributed cognition*

Two major components of a concept

Miller & Johnson-Laird (1976)
Core: key defining properties
- Grandmother: parent's mother, female, has grandchildren
Identification procedure: typical perceptual features used for categorization
- Grandmother: grey/white hair, some wrinkles on the face, etc.
(Ideally both core and identification procedure are used in categorization; sometimes judgment relies on only one type)

Can it really account for human behavior?

Model simulation vs. real observation of behavior (will not be exactly the same)
Based on the assumption of the *computability of mind*
*Representation* at each node/neuron: is it really like a feature/letter/word?
(Connectionism can be used to explain neural sensory inputs, types of reps, etc., because it models neural networks)

Why Connectionism

A more detailed visualization of a "system" or cognitive process (e.g., object recognition)
An attempt to simulate how the brain implements a cognitive process:
- Artificial Neural Network (ANN) (another name for connectionism)
- A specific notion of how a node/neuron is activated or inhibited (e.g., math equations) (the neuron-firing equation - the Joker example)

How do we test "updating"?

N-back task:
- e.g., the 3-back task: in a series of numbers (presented either visually or auditorily), press a button whenever the current number matches the one presented 3 trials back (e.g., 3 6 2 9 0 8 9 == press the button on the second 9; a toy sketch follows below)
Verbal fluency task:
- (Used with people with dementia, etc.)
- Example: you have one minute to come up with all the words starting with B, BUT NO REPETITIONS!
What does *updating* have to do with the verbal fluency task???
1. *Monitoring* the words you've produced
2. Updating current responses
3. Preventing repetition
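A minimal sketch of scoring an n-back sequence as described above; the `n_back_targets` helper and the list representation are invented for illustration:

```python
# Minimal n-back sketch: flag positions where the current item matches the
# item presented n trials earlier. Sequence is the toy example from the card.
def n_back_targets(sequence, n=3):
    return [i for i in range(n, len(sequence))
            if sequence[i] == sequence[i - n]]   # match n trials back -> press

sequence = [3, 6, 2, 9, 0, 8, 9]
print(n_back_targets(sequence))  # [6]: the second 9 matches the 9 three back
```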

Hidden layer: a "ghost" layer between input layer and output layer

One example: reading out loud (Seidenberg & McClelland 1989):
- Orthographic/written units (input): letter strings
- Phonological/sound units (output): individual sounds
- Hidden: the letters-to-sound *conversion rule* (e.g., a spelling pattern mapped to the sound [f])
- STILL a connectionist model! Look at the number of units in each oval --> an oval is drawn instead of 400 separate nodes

Another application of machine learning: face recognition

Over time, recognition gets better with more input -> machine learning
Train a computer to recognize human faces:
- Input layer: the computer identifies pixels of light and dark
- Hidden layer 1: the computer learns to identify edges and simple shapes
- Hidden layer 2: the computer learns to identify more complex shapes and objects
- Output layer: the computer learns which shapes and objects combine to define a human face
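A toy sketch of that layered idea (not a real face recognizer); the weights and the simple ReLU layers are invented for illustration:

```python
# Layered forward-pass sketch (illustrative, not a real face recognizer):
# each hidden layer re-represents the previous layer's output.
def layer(inputs, weights):
    # One output unit per weight row: weighted sum + simple ReLU nonlinearity
    return [max(0.0, sum(i * w for i, w in zip(inputs, row))) for row in weights]

pixels   = [0.9, 0.2, 0.7, 0.4]                    # input layer: light/dark values
w_edges  = [[1, -1, 0, 0], [0, 0, 1, -1]]          # hidden layer 1: edge detectors
w_shapes = [[0.5, 0.5]]                            # hidden layer 2: combines edges
edges  = layer(pixels, w_edges)                    # -> simple features
shapes = layer(edges, w_shapes)                    # -> more complex features
print(shapes)                                      # output feeds a face/no-face decision
```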

The A-Not-B Error: Explained In Embodied View

Q: why do infants keep looking for the lion in place A but not B, even though it was revealed that the lion is hidden in B?
Embodied/dynamical system view (another, different way to explain the conditioning):
- Failure to find the hidden object in place B -> all the action parameters were set up for reaching the object in place A (cannot make the adjustments in time)
-> immature dynamics of reaching behavior
-> *DE-COUPLED* perception-action
(Can also explain why some baseball players miss new types of pitches)

The A-Not-B Error: Tricked By Conceptual Representation

Q: why do infants keep looking for the lion in place A but not B, even though it was revealed that the lion is hidden in B?
How would a radical behaviorist explain this phenomenon?
Traditional views:
- Immature concept of the object: treating A as the object's fixed location (object permanence)
- Selective attention (inattentional blindness)
- Behaviorist account: reinforcement and *conditioning*

Side Note: Factors Affecting The A-Not-B Experiment?

Q: why do infants keep looking for the lion in place A but not B, even though it was revealed that the lion is hidden in B?
What factors might affect the result of this experiment?
- Distance between infant and object
- Number of "A" trials before switching
- Delay between hiding and search
- Type of object
- (Type of reinforcement, age, etc.)

Box-and-arrow vs. Connectionist model

Recognition of "Mopisailo" Connectionist model more abstract & harder to interpret

Flash back to Norman's model:

Recognition of "Mopisailo" (physical stimulus -> sensory transduction -> .....) What box-and-arrow models can and cannot tell us? -lacks dynamics - interaction w/ environment -*doesn't explain anything ab whats INSIDE* "regulatory system" or "physical stimulus", etc (WHY connectionism)

Is executive control the same as self-control (a broad and generic term)?

Related but NOT the same thing! Both involve self-control, but executive control has more to do with the *allocation of cognitive resources* (executive control involves self-control, but is also much broader)

Application of machine learning

Search engines: suggested search terms, top hits, etc. Speech recognition: noise vs. signal Machine translation: grammatical vs. ungrammatical Users' browsing behavior: suggested videos, trending topics, etc.

Are these processes independent of each other? (NO)

Selecting: look for keywords in a lecture *-->* Maintaining: keep those keywords in working memory *-->* Updating: Update working memory when receiving more info *-->* Rerouting: Switch attention to the visual on the slide

Embodied view of metacognition

Sensorimotor experience in metacognition
Alban & Kelley (2013) - major manipulations of the experiments:
-> Exp. 1: 50% of the JOLs made on a "light" clipboard; the other 50% on a "heavy" clipboard
-> Exp. 2: each word attached to a box of varied weight
-> Exp. 3: a larger difference in weight between the light and heavy boxes
Results: weight matters in metacognition!!
-> Words attached to heavy boxes received higher JOL ratings!
-> (In all three experiments, the "heavy" bar was higher than the "light" bar)

Interim summary: Similarities and differences between box-and-arrow and connectionist models

Similarities:
- Input processing and output generation
- Both can have bottom-up and top-down processes
- Both can be interactive (top-down/feedback requires bottom-up/feedforward)
Differences: ...a lot
- *Parallel distributed processing (PDP)* in connectionism
- Computer metaphor vs. neural network metaphor

Carrying a heavy backpack while studying

Skulmowski & Rey (2017):
- Subjects divided into two groups: with and without a backpack
- Task: learning 21 words (of varying difficulty levels)
- Assessments: *Judgment of Learning (the likelihood of recalling a word afterwards: a METACOGNITIVE measurement)* and a word recall task (a general cognitive measurement)
Results: carrying heavy stuff helps memory recall (cognition) and even gives you more confidence (metacognition)!
- (Mean judgment of learning was higher with the backpack, and higher for 1-2 syllable words than for 3-4 syllable words)
- (Mean retention score, i.e., actual word recall, was also higher with the backpack)

If Model Comes Before Theory...

Some potential problems:
- Modeler/experimenter's bias (wanting the model to work - subjective)
--> e.g., if we predict that "mouth" will be the most salient feature, then we make the connection weight of mouth the largest of all...
- Determination of connection weights
- Failure to explain real human performance
- Lack of a priori assumptions about how the model should work (there should be a hypothesis based on previous research)

One example of metacognition: Source monitoring

Source monitoring: specifically recalling *only the origin of an event* or mental state:
- Where did event X happen?
- When did event X happen?
- How did event X happen?
--> How did I get to know person X?
(BUT: when you recall *"what"* the event is about, i.e., the event itself, it's NOT metacognition!)
(The apple is the "event" and the branch is the "context (source) of the event" - metacognition)

Train the machine to classify the "berries"

Stimulus -> Input layer ("visual input"/receptors) -> Hidden layer (complex classification) -> Output layer

How do we test "selecting"?

Stroop Task: name the *color* of each word: -> *"Select"* the task-relevant feature

Another feature: hidden layer - What's being hidden?

The *conversion process from input to output*
input layer --- hidden layer ---> output layer
- You can have as many hidden layers as you want
- More hidden layers = more COMPLEX = DEEP LEARNING

Chinese room experiment: To test the validity of Strong AI

The old man's task: take the Chinese message and generate an output in Chinese using his "rule book"
Does the man in the room need to understand Chinese to do this task? (NO)
- A person outside passes IN a message in Chinese
- A person inside the room with the "program" book
- A person waiting outside for the output in Chinese

A Bittersweet Problem

The role of representation in embodied cognition? How do outfielders catch a fly ball and even make a diving catch? (the famous "outfielder problem")

Is Connectionist Model a Theory?

Theory vs. model: which comes first?
Some essential components of a theory:
- *Generalizability*
- Clear *predictions* about the phenomenon in question
- Provides an account of why an ALTERNATIVE THEORY is not valid
Model: a tool to test the validity of a theory

Why Selective Attention/ Encoding Is A Type Of Off-Loading

Two meanings of "off-loading":
1. *Actively externalizing the cognitive load* onto the environment (e.g., taking notes - "EXTERNALIZATION OF MEMORY") (ACTIVE; DIRECT)
2. *Selecting the most relevant info* for the current task and leaving everything else out there (INDIRECT):
- Highlighting text
- Trying to remember key words/concepts from a lecture

Some tips to understand a connectionist model

Understand the theoretical framework and hypotheses behind the model first
Identify the direction of connections (uni- or bidirectional)
Identify what each node represents in each layer
Identify within- or between-layer connections (facilitation/activation or inhibition)

The Role Of Representation In Embodied Cognition

What do we typically do to catch a fly ball?
- Eye on the ball
- Estimate where to intercept the ball from its velocity and angle
- Align ourselves with the projected path of the ball as a straight line
- Make final *adjustments* to intercept the ball
== All about TIME == but how do we have this intuition?
Assumptions of the *traditional view:*
- MENTAL REPRESENTATION of the projected path of the ball (impossible to have mental reps for EVERY fly ball, and no two are the same)
- Efficient mental computation and simulation using the representations (not reasonable given the time frame)
- Body as the executor of the action plan
Linear processing cycle (Clark 1999): *perceive -> compute -> act* (a simple box-and-arrow model)
Potential problems:
- Perfect and fast computation required in order to catch the ball
- A LOT of representations needed!

Implications of the Chinese room experiment?

What does this experiment tell us about the Turing test? -> Passing the test ≠ understanding natural language
What does the machine (or the person in the Chinese room) actually do? -> Manipulation of symbols; pattern matching
Does the Chinese room experiment support strong AI? (NO)

Categorization and Justification (Landau 1982)

Which one is the grandmother? Why? (people use different mechanisms, e.g., age, presence of a child)
- Judging by age -> perceptual features -> *identification procedure* (e.g., by appearance)
- Judging by the defining feature (having a grandchild) -> *core of the concept* (i.e., by definition)
Another potential dilemma for AI developers: -> Train a machine to categorize an object/person by appearance or by definition (or, ideally, both)?

Connectionist Model vs. Animal Model:

Why connectionist models and animal models provide more flexibility than research on real human subjects:
- Animal model: test on animals to infer a model of human behavior
- Connectionist model: design a model to process the input and generate the output
McCloskey (1991) - some phenomena are not easy to test on humans:
--> Perception of intolerable noise
--> Influence of an intolerable drug dosage
How does a connectionist model provide such flexibility in modeling cognition?

FAQ: Can some nodes be left unconnected?

YES! - irrelevant features
- (It is impossible to show ALL the nodes in certain layers ~ researchers only show the relevant nodes)

So...how good is ChatGPT (Generative Pretrained Transformer)?

YOU: How are you today?
CHATGPT: "As an AI language model, I do not have emotions or feelings, but I'm always here and ready to assist you with any questions or tasks you may have."
--> Failed the Turing test
YOU: I read that bipedalism has a lot to do with human cognitive evolution. Have you heard anything about that?
CHATGPT: Yes, that's correct. The evolution of bipedalism....

Can a model work without feedback (i.e. feedforward only)?

Yes! E.g., *rapid object categorization* (Serre et al. 2007): feedback is not in this model because it's "rapid"!
- (NOT in a DEGRADED context, i.e., recognizing an object in fog)
Visual cortex (V1, V2, V4) --> Inferotemporal cortex (PIT, AIT) (via the VENTRAL "WHAT" STREAM) --> Prefrontal cortex
(Some layers can be "bypassed" depending on feature saliency!)

Does machine learning support strong AI?

YouTube knows what you like to watch! Does it have a mind? (not really)

Both Feedforward and Feedback

e.g., features, letters, and words all interconnected
A smaller portion of the original model:
- (Letters A and T)
- WORD level -> one of the words containing "A" and "T" is activated in your mind (e.g., TAKE)
- LETTER level -> only a *subset* of the letters (nodes) is shown --> A N T G S (~ A and T are activated at the same time/in parallel instead of in serial order)
- (If you were finding words with N, G, S, then N, G, S would INHIBIT TAKE; a toy sketch follows below)
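A toy sketch of this interactive activation (feedforward + feedback + within-layer inhibition); the two-word lexicon and all weights are invented for illustration:

```python
# Toy interactive-activation sketch: letter nodes feed words (feedforward),
# active words feed back support to their letters, and words inhibit each
# other within the layer. Lexicon and weights are invented for illustration.
lexicon = {"TAKE": {"A", "T"}, "SONG": {"N", "G", "S"}}
letters = {"A": 1.0, "T": 1.0, "N": 0.0, "G": 0.0, "S": 0.0}
words = {w: 0.0 for w in lexicon}

for _ in range(5):  # a few interactive cycles
    # Feedforward: letters excite the words that contain them
    for w, wl in lexicon.items():
        words[w] += 0.2 * sum(letters[l] for l in wl)
    # Within-layer inhibition: each word suppresses its competitors
    for w in words:
        words[w] -= 0.1 * sum(a for o, a in words.items() if o != w)
    # Feedback: active words send support back down to their letters
    for w, wl in lexicon.items():
        for l in wl:
            letters[l] += 0.05 * words[w]

print(max(words, key=words.get))  # -> TAKE wins, given A and T as input
```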

Connectionism - what's being connected?

neurons

