PHIL 246 Exam 3
Nagel's cognitive science challenge
- attempt to explain the mind's & body's relationship to consciousness - a physical theory of mind requires that phenomenological features have a physical account - for each individual, there is something it is like to be them - how can consciousness be studied objectively when it's inherently subjective from person to person
global positive valence
- being nice to nice ppl in the moment, but only if they were also nice in the past
local positive valence
- being nice to nice ppl in the moment, regardless of what recipient has done in the past
original sin of cognition
- coined by Sarah-Jane Leslie - we essentialize groups of ppl bc we can't help ourselves - generalize from individual members of a group to the group as a whole - usually only when the behavior is strikingly negative
Tomasello's "shared intentionality"
- collaborative interactions in which participants share psychological states w one another -- this means that they have a shared goal, not that they're mind-melding - sympathy (acknowledges others' suffering) vs empathy (shares others' suffering)
physical theory of mind
- computations are implemented in the brain - ppl have tried to use this to explain consciousness
Google DeepDream
- computer vision program designed to detect faces & other patterns in images w aim of automatically classifying images - can be run in reverse after training - can "hallucinate" essence into an image, iteration after iteration
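A minimal sketch of the "run in reverse" idea (assumes PyTorch + torchvision; the real system used a Google-trained network, so vgg16 here is a stand-in): hold the trained weights fixed and do gradient ascent on the image itself, amplifying whatever patterns the network already detects.

```python
# Sketch of the DeepDream idea: adjust the IMAGE, not the weights, so each
# iteration strengthens whatever the network "sees" in it.
import torch
import torchvision.models as models

model = models.vgg16(pretrained=True).features[:20].eval()  # stop at a mid layer

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # noise (or a real photo)

for step in range(50):              # "iteration after iteration"
    activations = model(img)
    loss = activations.norm()       # how strongly the layer responds
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / img.grad.abs().mean()  # gradient ASCENT on the image
        img.grad.zero_()
```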
mind
- 1 component of Cartesian dualism - res cogitans - thinking thing - subjective, invisible, private - not physical
body
- 1 component of Cartesian dualism - res extensa - extended thing - objective, visible, public - exists only in the physical world
essentialism
- 1 of 4 main pillars of what it is to be human - intuitive theory that categories of social organization are reflected in objective distinctions in nature that are indicative of something deep, stable, informative about those category members
Wug test
- a test of a child's knowledge of morphology (word formation) - children were shown a picture of a Wug, then shown a picture of 2 of these creatures and asked to complete "There are 2...". - almost unanimously, the children said "Wugs," applying the plural ending -s to a word they'd never heard
Imitation Game
- Alan Turing's reframing of "Can a machine do what we as thinking entities can do?" - an interrogator holds a text conversation w a hidden human & a hidden machine and tries to tell which is which
mentalistic
- any explanation of behavior that appeals to mental, spiritual, psychic, or subjective states - questioned by scientifically minded ppl - the evidence for it is often ambiguous
Cartesian dualism
- Descartes' view that all of reality could ultimately be reduced to mind and matter - mind & body are totally separate - introduces mind-body problem
epistemological problem
- If I believe something to be true that is actually false, then I am in a "false physical state." But physical states cannot be true or false, they just are. - according to Nagel, this isn't the problem of other minds (conceptual problem of how mental states are attributed to others)
dispositional account
- "Ks are F" is true iff all Ks are either F or are disposed to be F (in normal or relevant circumstances) - leads to stereotyping of social groups -- the whole group gets judged based on individual members
actual domain
- associated w social kinds - a kind or group is essentialized just in case its members are viewed as sharing a fundamental nature that causally grounds a substantial number of their outwardly observable properties - not biologically grounded, immutable, or necessary -- race isn't biological - essentialized at the basic level -- distinctive & usefully predictive -- this helps generalize across the group
intentional stance
- assumption that others are agents motivated to behave in a way that is consistent with their current mental states: 1. figure out what the organism believes about the world 2. figure out what the organism wants in the world 3. figure out how the organism will act to achieve its goals in light of its beliefs - this line of practical reasoning yields a decision about what the agent ought to do (see the sketch below)
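A toy Python sketch of the 3 steps (all names & structures are hypothetical, just to make the practical-reasoning schema concrete):

```python
# Toy sketch of the intentional stance: predict an agent's action from its
# (possibly false) beliefs and its desires, not from its physical makeup.
def predict_action(beliefs, desire):
    # Step 1: `beliefs` maps each available action to the outcome the agent
    # BELIEVES it will produce (its model of the world, which may be wrong).
    # Step 2: `desire` is what the agent wants.
    # Step 3: choose the action that achieves the desire given the beliefs.
    for action, believed_outcome in beliefs.items():
        if believed_outcome == desire:
            return action
    return "no action"

# Sally falsely believes the ball is still in the basket:
sally = {"look in basket": "find ball", "look in box": "find nothing"}
print(predict_action(sally, "find ball"))  # -> "look in basket"
```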
deep neural network
- a complex version of an artificial neural network that is capable of highly advanced problem solving - requires intensive training over massive data sets - see the sketch below
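A minimal sketch (assuming numpy; real deep networks have far more layers, and training would tune every W & b) of what "stacked layers" means:

```python
# Each layer = weighted sum + nonlinearity; "deep" = many layers stacked.
import numpy as np

def relu(x):
    return np.maximum(0, x)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)                             # input features
W1, b1 = rng.standard_normal((16, 8)), np.zeros(16)    # hidden layer 1
W2, b2 = rng.standard_normal((16, 16)), np.zeros(16)   # hidden layer 2
W3, b3 = rng.standard_normal((1, 16)), np.zeros(1)     # output layer

h1 = relu(W1 @ x + b1)
h2 = relu(W2 @ h1 + b2)   # more such layers = a deeper network
y = W3 @ h2 + b3          # training over massive data sets adjusts every W, b
print(y)
```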
social kind case study
- after 9/11, hate crimes against Muslims increased by 1600% - motivated by the desire to harm members of a particular group only bc they're members of that group - the fact that the victims were Muslim was enough to "justify" the violent acts, all bc the 9/11 perpetrators were 19 Arab men
SHRDLU
- an early program in artificial intelligence intended to model natural language processing - could understand language, but its reasoning & ability to act on its world were limited - had internal organization - was a breakthrough in its time
social kinds
- application of a domain outside of its evolutionary purpose - part of the theory of prejudice is that we extend a basic form of reasoning & representation (for biological kinds) to categories of ppl - associated w the actual domain
nativist approach to moral development
- approach to moral development - all ppl & cultures tend to share the same characteristics of moral judgments - harder to explain by appeal to experience - ppl don't just reflect the world back; minds shape what is experienced
empiricist approach to moral development
- approach to moral development - exposure to situations in which one is positively helped or negatively harmed - observe others who are helped or harmed - told by parents what's right or wrong
Eliza
- an early artificial intelligence chatbot - an AI "psychotherapist" - developed in the mid-1960s by Joseph Weizenbaum
Parry
- an early artificial intelligence chatbot - simulated a person w paranoid schizophrenia - developed in 1972 - only 48% of psychiatrists could tell the difference bw this program and a human
large language models
- artificial intelligence models trained on large volumes of text to represent semantic relationships between words - build up a world of vectors by analyzing vast swaths of digitized text across the internet - feed word vectors corresponding to sentences into deep neural networks -- remove a word & train the model to predict the missing word (sketched below) - spurred GPT
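A toy Python sketch of the remove-a-word training signal (a simple counting "model" stands in for the deep neural network; the corpus is made up):

```python
# Hide one word per position and learn to recover it from its context.
from collections import Counter

corpus = ["the cat sat on the mat", "the dog sat on the rug"]

context_counts = {}
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        ctx = (words[i - 1] if i > 0 else "<s>",
               words[i + 1] if i < len(words) - 1 else "</s>")
        context_counts.setdefault(ctx, Counter())[w] += 1

def predict_missing(prev, nxt):
    """Predict a removed word from its surrounding context."""
    counts = context_counts.get((prev, nxt), Counter())
    return counts.most_common(1)[0][0] if counts else "<unk>"

print(predict_missing("the", "sat"))  # -> "cat" (tied w "dog"; first seen wins)
```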
Leslie et al
- asks "How much of a role do generics play in essentialist thinking?" - independent variable: illustrated story book where one shows generalizations ("Zarpies like to drink milk."), and the other shows specifics ("This Zarpie likes to drink milk.") - dependent variable: causal mechanism (category-based reasons vs person-based reason as forced choice), flexibility ("sometimes other things" vs "always one way" as forced choice), inheritance ("traits follow category" vs "traits are learned" as forced choice; obviously essentialization) results - essentialist thinking is greater for generic condition than specific condition, and increases over age in both conditions (pg 10)
original domain
- associated w biological kinds - essentialist thinking likely evolved in the context of reasoning about the biological world - the psychologically privileged level at which essence is taken to ground a wide range of properties is known as the basic level -- basic-level categories are more predictive of more distinctive properties -- a basic way of dealing w dangerous/harmful info involves rapidly generalizing it to a salient kind/category at the right taxonomic level
Logic Theorist
- development in artificial intelligence - introduced at Dartmouth Conference - was able to create proofs of mathematical theorems that involve principles of logic - used heuristics & other kinds of human-like reasoning strategies to come up w proofs
Leslie's theory of generics
- disposition to generalize -- evolutionarily good reasons; applies well to the original domain -- but we seem wired to extend that mechanism in ways that are obviously inappropriate & harmful - we attribute a characteristic to the kind, not the individual, bc that was evolutionarily helpful - generic language is bad; use adjectives rather than nouns
biological kinds
- expect innate potential to prevail over competing environmental influences - insides matter (a cow raised w sheep is still a cow) - independent of language (a seed from an apple, even if planted w flowers & not called an apple seed, will still grow an apple) - associated w original domain
hidden & unobservable
- feature of essentialism - essences are distinct from their surface observable features - essences are deep
homogeneity
- feature of essentialism - expect all members of K to share certain properties with one another - even if there are myriad other differences that we can detect between those individuals
immutability
- feature of essentialism - expect that it would be exceedingly difficult, if not impossible, for a member of K to rid themselves of their K-determined properties
innate potential
- feature of essentialism - expect things inherit their essences from their forebears - determined at birth
access
- general category of easy problems - if a system can react on the basis of some info, then it must be conscious of (have access to) that info - ex: you planned to turn right but stopped when someone stepped in front of your car - may be hard to specify the mechanism by which info about internal states is retrieved & fed into action-planning systems - goal is to explain how a system could use info about internal states in directing later computational processes
reportability
- general category of easy problems - if a creature can report on a mental state, then it is conscious of that mental state - "I'm happy" or "I think she loves me" shows awareness - may be difficult to show the mechanism by which info about internal states is made available for verbal report - goal is to explain how a system could perform the function of producing reports on internal states
wakefulness
- general category of easy problems - say creature is conscious when they're awake - appropriately receptive to info from environment & use info to direct behavior in an appropriate way
goal-directedness case study
- habituation phase: infants watched a hand approaching 1 of 2 toys - test event type 1 (perceptual): hand has the same goal, but the target is in a different location - test event type 2 (goal-directed): hand has a different goal - conclusion: infants' different reactions suggested that they attributed an intentional relationship bw the agent and its goal object; this difference doesn't occur when the reach is done w a pincer instead of a hand
psychological essentialism
- hypothesis that humans represent some categories as having an underlying essence that unifies members of a category & is causally responsible for their typical attributes & behaviors - essentialized theories posit a hidden, nonobvious, persistent property or underlying nature shared by members of that kind that causally grounds common properties & dispositions (not things you can change just by changing your outer appearance) - essence makes members of the kind the thing that they are
Tomasello's "moral sense"
- it "co-evolves" with the drive for cooperation in social animals - evolutionary biology, anthropology, primatology says cooperatively is a requirement for successful group living across species - species can achieve more together, if some are willing to make personal sacrifices - evolves to sustain collective action & cooperation
Turing Test
- method of determining the strength of artificial intelligence - human tries to decide if the intelligence at the other end of a text chat is human or a computer
moral identification
- moral sense component - ability to ID which actions in which situations are good or bad & to respond appropriately despite personal cost - potentially related to empathetic capacity
moral evaluation
- moral sense component - ability to judge ppl for their abundance or lack of cooperativity or empathy - to be appropriately attracted or repulsed by individuals depending on those judgments
moral retribution
- moral sense component - ability to support or carry out punishment of those who act antisocially
Knowlton et al paper
- paper - most: subset-superset relationship - more: subset-subset relationship - "more" → dots visually segregated by color - "most" → differently colored dots mixed over each other - 2 ways to represent "most pianos are black": 1. #(pianos & black) > #(pianos) − #(pianos & black) ≈ the black pianos outnumber all pianos minus the black pianos 2. #(pianos & black) > #(pianos & ∼black) ≈ the black pianos outnumber the non-black pianos - see the sketch below
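The 2 representations, sketched in Python over a toy list of piano colors (made-up data): same truth value, different computations.

```python
# Two verification procedures for "most pianos are black".
pianos = ["black", "black", "white", "black"]

black = sum(1 for p in pianos if p == "black")
total = len(pianos)

# 1. subset-superset: black pianos vs (all pianos minus the black ones)
most_v1 = black > total - black
# 2. subset-subset: black pianos vs non-black pianos, counted directly
most_v2 = black > sum(1 for p in pianos if p != "black")

print(most_v1, most_v2)  # -> True True
```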
sociality
- part of being human - tendency to represent others' minds - tendency to represent others as agents - goal is to figure out what makes an agent
counterbalancing
- part of experimental design - method of controlling for order effects in a repeated-measures design by either including all orders of treatment or by randomly determining the order for each subject - ex: randomly assign conditions that you can't otherwise control (see the sketch below)
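A minimal Python sketch of both strategies (2 hypothetical treatments A & B, 8 hypothetical subjects):

```python
# Counterbalancing: control order effects across subjects.
import itertools
import random

treatments = ["A", "B"]

# Option 1: include all possible orders, cycling through subjects
all_orders = list(itertools.permutations(treatments))
subject_orders = [all_orders[i % len(all_orders)] for i in range(8)]

# Option 2: randomly determine the order for each subject
random_orders = [random.sample(treatments, k=len(treatments)) for _ in range(8)]

print(subject_orders)
print(random_orders)
```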
p-value
- the probability of obtaining results at least as extreme as yours purely by chance (i.e., if the null hypothesis were true) - the lower, the better - see the sketch below
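One common way to estimate a p-value, sketched in Python w made-up data (a permutation test; not the only method): shuffle the group labels many times and see how often chance alone produces a difference at least as large as the one observed.

```python
# Permutation-test estimate of a (one-sided) p-value.
import random

helped   = [5, 6, 7, 6, 8]   # e.g., scores in an experimental condition
control  = [4, 5, 4, 6, 5]
observed = sum(helped) / len(helped) - sum(control) / len(control)

pooled = helped + control
extreme = 0
n_perm = 10_000
for _ in range(n_perm):
    random.shuffle(pooled)        # chance: group labels carry no information
    fake_diff = (sum(pooled[:5]) / 5) - (sum(pooled[5:]) / 5)
    if fake_diff >= observed:
        extreme += 1

print(extreme / n_perm)  # small value -> difference unlikely under chance alone
```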
LISP computer language
- the language in which Terry Winograd programmed SHRDLU at MIT - SHRDLU accepted commands thru the keyboard & responded w English text - manipulated objects in a virtual 3D world, depicted on a computer monitor - generated syntactic & semantic representations of input sentences & produced responses from such representations - had basic representations of physical laws to predict what would happen when its arm moved
characteristic property
- property of an individual - the individual isn't thought to become a different kind of thing even when the property is artificially changed
intentionality
- relationship between subjective states & the rest of the world - the quality of being directed toward an object
mind-body problem
- result of Cartesian dualism - Are mind and body separate and distinct, or is the mind simply the physical brain's subjective experience?
Baby Intuition Benchmarks
- test of representations of the social world - 2D & 3D videos meant to test AI models on tasks that infants respond to in ways that implicate mental representations of agents' goals, intentions, etc. - ex: one object preferentially approaches one of 2 other objects, replicating the logic of Woodward's agent-preference experiment w infants - DNN models specially designed to be able to represent agents' goals didn't perform like human infants on these tasks -- performed at chance → they don't have the same common-sense understanding of agents that human infants do
non-human agent case study
- tested 1-year-olds' gaze-following behavior - tested different combos of face/nonface & contingent/noncontingent behavior - infants followed the gaze of a novel object whenever it had a face or whenever its behavior was responsive to the infants' own behavior - infants only failed to follow gaze when the novel object lacked both a face & contingent behavior
mind-reading case study
- tested how well children can understand others' moral actions - children prefer the helper over the hinderer even when the physical actions are identical - children track multiple mental states & who represents which
moral evaluation case study
- tested the morality play paradigm in goal-failure scenarios - helpers & hinderers on the hill - almost 100% of babies reached for the helper over the hinderer - neither 6- nor 10-month-olds had a preference bw helper & hinderer when the hill climber had no eyes (no shared intentionality) - determining moral responsibility requires consideration of the mental states that drove the action
moral identification case study
- tested whether infants would show helping behavior/empathetic responses to ppl in need as most adults do - 18- & 24-month-olds were significantly more likely to help in experimental (needed help) vs control conditions - the contrast bw helping behavior in need vs non-need conditions was more important than the proportion of those who actually helped
characters challenge
- tests differences bw human visual processing & DNNs: 1. given 1 made-up character, humans are very consistent in picking out other drawings that represent the same character - current DNNs don't do this 2. given an exemplar, humans are consistent in the version they'll produce when generating forms - current DNNs don't do this 3. given an exemplar, humans are consistent in the way they represent the segmented components of the character & also in how they'd turn that representation into an execution of strokes to produce the character - current DNNs don't do this - also applies to other objects - this is the reason for human-vs-robot tests that ask how many stop lights appear in a set of images
artificial intelligence
- thinking thing that can reason, learn, predict, plan, adapt, communicate thru language, has morals, empathy, is aware - not naturally occurring, is synthetic
false beliefs
- type of belief - an idea one believes that doesn't match reality
true belief
- type of belief - an idea one believes that matches reality
generative pre-trained transformer
- type of large language model - trained on 500 billion words - a deep neural network made more complex w special components that make training more efficient & accurate - current models have 12,288-dimensional vectors for each word, 96 hidden layers w 49,152 hidden-layer nodes, & 175 billion parameters that get tweaked during training (see the arithmetic sketch below) - don't seem to have concepts that can enter into logical reasoning
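A back-of-envelope Python check (a rough standard-transformer estimate, not the exact documented architecture) that the numbers above hang together; note 49,152 = 4 × 12,288, the usual feed-forward expansion:

```python
# Rough estimate: each transformer layer has ~4*d^2 attention parameters
# plus ~8*d^2 feed-forward parameters (d -> 4d -> d), i.e. ~12*d^2 total.
d, layers = 12_288, 96
approx_params = 12 * d**2 * layers
print(f"{approx_params:,}")  # ~174 billion, close to the quoted 175 billion
```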
Role-filler independence
- type of relation - the relation itself (e.g., "x is on y") is represented independently of the specific things filling its roles - an abstract idea - see the sketch below
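A small Python sketch (illustrative names) of what it means for the relation to be independent of its fillers:

```python
# Role-filler independence: the "on" relation is one structure whose role
# slots can be refilled without changing the relation itself.
from dataclasses import dataclass

@dataclass
class On:            # the abstract relation, independent of its fillers
    above: str       # role 1
    below: str       # role 2

cup_on_table = On(above="cup", below="table")
table_on_cup = On(above="table", below="cup")  # same relation, roles refilled
print(cup_on_table, table_on_cup)
```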
internal states
- unobservable - not perceived like rocks or cows, but that's ok bc we believe in unobservable things like prime #s & atoms - have a special relationship with the world that other unobservables don't - are intentional objects
vector
- a word represented by a long list of numbers - represents the contexts in which the word occurs across a text -- e.g., the immediately preceding contexts for words - can be used as input to deep neural networks (see the sketch below)
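A toy Python sketch of building vectors from immediately preceding contexts (simple counts standing in for learned values; the corpus is made up):

```python
# Build each word's vector from counts of the words that precede it.
from collections import Counter

corpus = "the cat sat on the mat the cat ran".split()
vocab = sorted(set(corpus))

vectors = {w: Counter() for w in vocab}
for prev, word in zip(corpus, corpus[1:]):
    vectors[word][prev] += 1          # count which words precede `word`

# Each word's vector = its counts over the whole vocabulary, as a number list
print({w: [vectors[w][v] for v in vocab] for w in vocab})
```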
Pix2Struct (q)
What is 1 image-to-text model of artificial intelligence?
David Chalmers' hard problem
- the hard problem is understanding how subjective experiences arise, such as the experience of the quality of deep blue or the sensation of middle C - Chalmers: even when you've explained all the neural mechanisms & functions related to experience, 2 questions remain: 1) why is the performance of these functions accompanied by experience? 2) how do physical processes in the brain give rise to subjective experience?
Nagel's Bat
1. if brain states = mental states, everything known about the brain should be known about the mind 2. it is not possible for this to happen 3. therefore mental states cannot equal brain states - ex: you can know everything about being a bat (brain) but you cannot know everything about what it is like to be a bat if you are a nonbat (mental) - bats' experiences may seem weird, so we may doubt their reality or downplay their significance - we can understand them schematically, through metaphorical comparison - pretending to have an experience isn't the same as having it
Sora (q)
What is 1 text-to-video model of artificial intelligence?
biological kinds (q)
What is more likely to be true - generics about biological kinds or social kinds?
Sally Anne Task
A false belief task. Sally has a basket; Anne has a box. Sally puts a ball inside the basket, and Anne sees her do it. Then Sally leaves the room. WHILE SALLY IS OUT OF THE ROOM, Anne takes the ball out of the basket and places it in the box. After telling a child this scenario, you ask: when Sally comes back, where will she look for the ball? Children WITHOUT THEORY OF MIND think Sally will look in the box, bc that's where the ball actually is. Children can't really succeed at this task until about 4; children under 4 are unable to consider the perspectives of others.
false beliefs (q)
Because of the ambiguity of mentalistic evidence, what kind of beliefs did Dennett say could provide convincing evidence for the attribution of mental states?
intentionality (q)
Besides consciousness, what is the hardest problem in the philosophy of mind?
both (q)
Can behaviors that are perceived as mental states in others be interpreted in mentalistic or non-mentalistic ways?
yes (q)
Can two identical physical actions have different moral motivations, depending on the context?
systematicity (q)
Deep neural networks fail at things having to do w reasoning & evidence, bc of their lack of _____________________.
discourse (q)
Deep neural networks' failures indicate that human ________________ - what they're trained on - doesn't totally encapsulate human reasoning across domains.
language of thought (q)
Due to their lack of systematicity, it seems that large language models don't have a _______________ ______ ________________.
ambiguous (q)
Large language models have trouble parsing the meanings of _______________ sentences.
psychological (q)
Focus on what kind of mental states is fundamental to the domain of moral judgement?
no (q)
Have DNNs been applied to visual versions of social reasoning / morality tasks yet?
flexible (q)
Human visual processing is better than deep neural network processing because it's more _________________ with the concepts it allows.
directed at world, are about things, have content (q)
In what 3 ways do internal states bear a special relationship with the world, signifying that they are, in fact, intentional objects?
inner principles (q)
Is moral worth seen in terms of visible action or inner principles?
ChatGPT, Bard (q)
What are 2 language models of artificial intelligence?
DALL-E, Midjourney (q)
What are 2 text-to-image models of artificial intelligence?
hidden & unobservable, homogeneity, immutability, innate potential (q)
What are 4 features of essentialism?
original domain, actual domain (q)
What are the 2 domains of essentialized kinds?
moral identification, moral evaluation, moral retribution (q)
What are the 3 components of the moral sense?
conscious, social, moral, essentialist (q)
What are the 4 main pillars that describe what it is to be human?
reportability, access, wakefulness (q)
What are the general categories of easy problems?
generative pre-trained transformer (q)
What does GPT stand for?
characters challenge (q)
What's a good example of how human visual processing can accommodate more variation than deep neural networks?
Van de Vondervoort paper
What's it asking? - do preschoolers consider one's intentions rather than the outcomes of sociomoral decisions? What's the authors' answer? - exp 1: yes, children like those w good intentions & punish those w bad intentions - exp 3: children pick the character that brings about the better outcome What's the experimental evidence? - exp 1: significant psychological development bw 3- & 4-year-olds - exp 3: 4-year-olds care about efficacy (you want to help, but are you good at it?) more than 3-year-olds do What objections can we make?
John McCarthy (q)
Who coined the term "artificial intelligence" in 1956?
original, actual (q)
biological kinds : ______________ domain : : social kinds : _____________ domain ?
Hafri et al paper
procedure - given a target image of 2 objects in a certain relationship & forced to quickly judge whether other images match the target - a knife in a cup is judged similar to a bone in a basket bc both show a containment relationship - used reaction times; conclusion - objects are visually organized in ways that relate them to each other - studied how visual processing encodes HOW objects are related to each other -- the mind automatically picks up on how objects are related beyond just their physicality