Minds, Machines, and Persons: Final


Dennett vs Block

Dennett - if it passes the quick probe, it is intelligent; a creature that can solve any problem given enough time (a million years) is not intelligent at all, because intelligence involves the ability to do impressive things despite one's limitations. His TT proposal: a being is intelligent if it has the capacity to produce a sensible sequence of verbal responses to verbal stimuli in a way constrained by the laws of nature - Blockhead doesn't respect the laws of nature, so it is not intelligent.
Block - Blockhead shows that passing isn't enough; we have to know what's inside the box. It is a conceptual truth that intelligence is NOT a black box, so we need to examine inner workings. It doesn't matter whether Blockhead is physically possible: it refutes the TT concept of intelligence.

Reply - brain-bound mental states can have derived intentionality (and response)

Ex. imagining a Venn diagram: the visual imagery represents an overlap only because it is interpreted by other mental states, so by the parity principle, derived intentionality can't rule out Otto's extended belief. Response - but visual imagery has SOME intrinsic intentionality, whereas Otto's notebook has no intrinsic intentionality at all; and (counter) the notebook's lack of content when decoupled can't rule out extended belief.

AlphaGo Zero example

AlphaGo was the first algorithm to defeat a human world champion at Go (the Chinese equivalent of chess). AlphaGo learned by studying human expert moves (supervised learning) and by playing itself (reinforcement learning); AlphaGo Zero learned by self-play alone, and the successor AlphaZero then taught itself chess and shogi and was amazing at them. Turing Test (quick probe) = conversation requires intelligence in indefinitely many domains; vs. AlphaGo = mastery of multiple games (Go, chess, shogi, etc.) but limited - it cannot discuss its latest match or understand politics, art, etc. Vs. GPT-3 = a massive deep neural network trained on billions of words; does OK on a wide variety of tasks but struggles with unusual prompts and logic.

Deep connectionist neural network (DCNN)

Have 1. depth, 2. hierarchical feature composition, and 3. abstraction (adjustment for nuisance variation).

Can distinguish 3 things when deciding to punish someone

Identity - punish someone only if they are the same person who did it.
Mental change - a drug that makes you completely forget what you did arguably makes you a different person.
With the sleeping-drug case, distinguish: 1. Is the person the same person? 2. Did they have the right kind of mental state, i.e. control/intent? 3. If there was no control/intent, the past version of you took the drug and should have known the consequences - that is negligence (the tracing theory of moral responsibility). You are responsible at t1 because of control/intent; at t2 you are responsible because you should have known better at some time in the past.
Locke - all of this is relevant because our moral intuitions are evidence for which theory of identity is correct.

Locke contrasts identity conditions of people with masses, artifacts, and organisms

Mass - remains identical over time IFF it retains the same parts, ex. lump of clay, heap of sand. Persists - you can flatten/roll the clay. Destroyed - remove a piece of clay. BUT people remain numerically identical even though their parts change - molecules change, we age.
Artifact - remains the same IFF its parts continue to be arranged in a way that satisfies externally defined functions, ex. phone, watch. Persists - can get a new screen. Destroyed - when it stops working. BUT people remain the same even though their external functions change, ex. child > employee > dad.
Organism - remains the same IFF its parts continue to fulfill the functions of life, ex. tree, mammal. Persists - through biological life. Destroyed - at death.

Why type identity theory?

Motivations: 1. it supports a general theory of pain (not just Zach's), 2. other sciences discover type identities. If pain = CTO, you can't have pain without CTO or vice versa; ex. water = H2O - you can't have water without H2O.

Anterograde amnesia

The patient loses the ability to encode new memories after brain damage. At each moment in time, he can remember everything that happened before the accident, but he cannot form new memories of his experiences. On the memory theory, the persons at any two times t1 and t2 after the damage are not identical: there is forgetting from one time to the next.

Retrograde amnesia

The patient loses all episodic memory from before the brain damage. Locke would say this severs the connection between p1 and p2: for any p1 existing before the damage and p2 existing after it, p1 ≠ p2. Moral and legal implications - you don't get credit for anything you did before, and you can't be punished for anything you did before.

Memory Theory of Personal Identity

Person x at t1 = person y at t2 if and only if y has episodic memories of events experienced by x.

Person as a forensic concept

Personal identity is intimately tied to moral and legal responsibility (a 'forensic' concept). Test: does the memory theory capture the moral/legal data? Identity + responsibility: it is justifiable to praise or blame you for a past action only if YOU did it. On this theory, forgetting is what destroys identity.

physical vs logical possibility

Physical possibility - P is physically possible IFF P is consistent with the laws of nature. Logical possibility - P is logically possible IFF P can be imagined without internal contradiction.

Theories of mind should...

Putnam: 1. allow for multiple realizability, 2. explain what cross-species mental states have in common.

The Turing Test (first pass)

Question: what is intelligence? The test: a judge questions both a human source and a computer source; a computer (or other being) is intelligent IFF it has the capacity to succeed at the imitation game (the judge can't tell them apart). Why test language: conversation can go anywhere.

Blockhead thought experiment

How it works: Blockhead stores every conversation that two English speakers could have in an hour; the assumption is that Blockhead is logically possible because the set of sensible conversations is finite. Result: it produces correct behavior and passes the TT, but its inner workings aren't intelligent. 1. Blockhead can pass the TT. 2. Blockhead is not intelligent - all the intelligence is that of the programmers. 3. Therefore, passing the TT is not sufficient for intelligence.

Human chauvinism reply

A simulated X is sometimes a real X, ex. light, flavoring. Saying that simulated minds aren't real minds = human chauvinism, because computers might think differently than humans and still be intelligent.

Inductive

A type of reasoning that presents cases or evidence that make a conclusion likely. Inductive - if the premises are true, the conclusion is very likely to be true but not guaranteed.

what about token identity theory?

Advantage - allows some multiple realizability. Disadvantage - what about android/alien pain without brains? And what do different tokens of pain have in common?

Brentano - intentionality as a mark of the mental

All and only minds have intentionality; that is how mental states have meaning.

Human embodiment

A body with a nervous system makes and maintains itself in a precarious environment; unlike in bacteria, a nervous system can change what is meaningful. Original intentionality arises through embodied sense-making: the environment gets intrinsic meaning relative to our embodied self-maintenance. Chinese room = no original intentionality, because it is not alive. Robot + systems replies = yes, if the robot makes and maintains its own body.

Original to derived intentionality

Claim - all meaning (intentionality) comes from things with mental states. Question - how do objects (words, pictures) get derived intentionality? Who interprets them? Answer - derived representations are interpreted by minds with original intentionality.

Empirical claim

claim about facts or reality based on experience or observation

Embodied AI

A computer that is embodied and embedded in the world in certain ways has mental states. Searle is right that syntax is not sufficient for semantics (meaning also requires embodiment and embedding), but wrong in his conclusion, because we ARE a computer - just one embodied and embedded in the world.

What systems have OG intentionality?

Connectionist view - connectionist systems. Decomposed CTM - decomposed symbolic programs. Adams and Aizawa - brain-bound mental states (not external objects). Intentional stance - nothing, because everything has to be interpreted. Enactivism - a living body making sense of a precarious environment.

Moral risks

databases are not curated to avoid biases, ensure truth, or encompass diverse perspectives

Memory theory captures why Jekyll and Hyde is a dilemma

Go free - frees a guilty man who committed the crime. Jail - imprisons a man who has no memory of committing the crime. The body theory sees no dilemma: jail him, since it is that man in that body who committed the crime.
For the one human being with split personalities: 1. Jail - body theory: that human being committed the crimes and should go to jail. Locke - the 'outie' is being punished for something they didn't do and have no memory of; the outie is the victim of what the 'innie' does. 2. Free - but for justice's sake we can't let the innie go free. 3. Asylum - involuntarily committing him is not good for people either. Locke - there is NO good option = a moral dilemma.

Identity is a transitive relation

Tyke Ike = Officer Ike, and Officer Ike = General Ike: the officer remembers the boy, and the general remembers the officer, even though the general has forgotten the boy. Locke's memory theory entails a contradiction - by transitivity, Tyke should = General, but on the memory theory General ≠ Tyke, because the general does not remember Tyke.
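
In symbols (my notation, not the card's), with = read as numerical identity and M(x, y) for "x has episodic memories of y's experiences":

\[ M(\text{Officer},\text{Tyke}) \Rightarrow \text{Officer} = \text{Tyke} \qquad M(\text{General},\text{Officer}) \Rightarrow \text{General} = \text{Officer} \]
\[ \text{Transitivity: } \text{General} = \text{Tyke} \qquad \text{but } \neg M(\text{General},\text{Tyke}) \Rightarrow \text{General} \neq \text{Tyke} \]

The memory theory thus yields both General = Tyke and General ≠ Tyke: a contradiction.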

Autopoiesis

Living organisms are self-organized and self-maintaining --> life exists in precarious environments; without self-maintenance it dissipates --> life survives by making sense of the parts of the environment that aid or hurt autopoiesis. Dennett - it is merely useful to talk as if sucrose is food that the bacterium seeks. Varela - sucrose is food intrinsically, from the bacterium's POV.

Deep Continuity Thesis

Mental states have original intentionality because we are alive: it arises from sense-making, which gives the environment intrinsic meaning from the organism's own perspective.

numerical and qualitative identity

numerical - X and Y are numerically identical IFF they are the same object qualitative - X and Y are qualitatively identical IFF they have all the same properties
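
In symbols (a standard formulation, quantifying over properties F; my addition, not the card's):

\[ x =_{\text{qual}} y \iff \forall F\,(Fx \leftrightarrow Fy) \]

Numerical identity entails qualitative identity, but not conversely in the classroom sense: two factory-fresh mugs share their intrinsic properties yet are two objects.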

objects and properties

Objects - individual things, ex. Mars, the professor, a mug. Properties - qualities, features, attributes of an object, ex. the professor wears glasses, has a brain.

Memory theory motivation - consciousness theory

People are distinguished from other kinds of objects because they are conscious thinking beings. A person existing at an earlier time t1 is numerically identical to a person at a later time t2 IFF those persons share the same consciousness. As far back as this consciousness extends to any past action or thought, so far extends the identity of that person - and it extends backwards through episodic memory.

memory connectedness

person p1 at t1 and person p2 at later time t2 are memory connected IFF p2 has episodic memories of events experienced by p1

memory continuity theory

Person p1 at t1 and person p2 at a later time t2 are memory continuous IFF p2 is the last link in a chain of persons beginning with p1 such that each person in the chain is memory connected with the preceding person. The theory: p1 and p2 are numerically identical IFF p1 and p2 are memory continuous.
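
A compact formalization (my notation, built from the two definitions above), writing C(x, y) for "y is memory connected to x":

\[ p_1 \sim p_2 \iff \exists\, q_0, \dots, q_n \,\big( q_0 = p_1 \wedge q_n = p_2 \wedge C(q_0,q_1) \wedge \dots \wedge C(q_{n-1},q_n) \big) \]

Unlike direct connectedness, this chained relation is transitive, which is how the continuity theory handles General Ike: the general is memory continuous with the boy via the officer.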

psychological continuity

Psychological states other than episodic memory are arguably also important for personal identity. Ex. Tractor Jack: 1. pre-accident = morally mean and selfish, 2. brain damage from an accident changes Jack's personality, 3. post-accident = morally virtuous, kind, generous. So is he the same person before and after? Is this as impactful as memory loss? YES - most would say a change in morality means a higher degree of identity change than amnesia. Two people are psychologically continuous with one another if they form part of an overlapping series of persons who are psychologically connected with one another. *Psychologically connected = enough of p2's mental characteristics and states are caused by p1's characteristics and traits.

Intentionality

Representation, aboutness, directedness. Mental states have meaning BECAUSE they are representational, or intentional; ex. a belief or perception is directed at a situation or state of affairs.

General question about defining mental states

Should we define mental states in terms of low-level decomposed functional computations (ex. store items in quickly decaying working memory and transfer some to long-term memory) or high-level abstract functions (ex. store beliefs to retrieve later)? Decomposition pros - creatures with the same decomposition likely share the same psychological laws, and it rules out unintelligent computations like Blockhead and the Chinese room. Cons - neural chauvinism: can't different creatures have beliefs/desires implemented by different algorithms?

Conceptual claim

statement that identifies the meaning of a word or phrase

Enactivism

States that cognition arises through the interaction between an organism and its environment. We have original intentionality because we have a body, that is, we are a living organism. There is intrinsic teleology: goals from our own POV, goals that serve our overarching telos of staying alive.

systems reply dubious assumption

The MBP assumption is that if Gertie can't do X, then no part of Gertie can do X: if Gertie the woman does not understand Chinese, then no part of Gertie's brain with the internalized rulebook can understand Chinese. Counter - Gertie can't process 400 billion bits of information per second, but her brain can. So the systems reply: if Gertie internalized the rulebook, Gertie's brain would understand Chinese even if she still does not.

episodic memory

the collection of past personal experiences that occurred at a particular time and place ex. high school graduation

Teleology

The explanation of phenomena by the purpose they serve rather than by postulated causes. Are organisms intrinsically teleological, or is it just useful to talk as if they are? Organisms strive to accomplish certain ends (reproduction, food, safety, shelter); ex. we would say that bacteria swim up a sugar gradient to get food, but intrinsically a bacterium is just a set of cellular mechanisms. Intrinsic view - we intrinsically have intentional mental states. Interpretation view - it is merely useful to talk as if we have intentional mental states.

Robot + System reply

the whole system - robot interacting with the world - has meaningful mental states, not Gertie

precise question of personal identity

What is it for a person at t1 to be numerically identical to a person at t2 - to remain the same person over time? Not qualitative identity, because people's properties change over time.

Robot reply to chinese room

- Claim: a robot would have meaningful mental states - a computer inside a robot operates the robot to perceive, move around, speak, etc., so symbols attach to things - analogy: we learn what words mean by naming things we can point to, look at, interact with, etc.

bounds of cognition, functional differences

- EMs are logically and physically possible, but no actual human has an extended mind - Otto's notebook does NOT play the same role as brain-bound mental states. Intrinsic vs. derived representation: Inga's brain state has intrinsic intentionality - it represents that MOMA is on 53rd without interpretation; Otto's notebook has only derived intentionality - the arbitrary words represent that MOMA is on 53rd only because Otto interprets them. 1. Mental states should have intrinsic intentionality. 2. Otto's notebook has only derived intentionality. 3. Therefore Otto's notebook does NOT contain mental states.

Non-mental representation and Derived intentionality

- A lot of non-mental things represent: books, words, paintings, graphs. Non-mental representation has derived intentionality (DI) - representing things because of how they are interpreted (by a mind). Ex. words are arbitrary, with no intrinsic connection to what they represent; a linguistic community interprets a symbol (word) to represent something.

CTM hardware

- A mental state = whatever physical state computes the function characteristic of that state. In humans the input-output function of pain is computed by CTO; in an octopus or android the same inputs and outputs are computed by different physical states - they all compute the same function.

multiple realizability

- A mental state like pain could be realized by different physical states across species - CTO in humans, something totally different in octopuses. The same mental state can be realized by many different types of physical states, so one CAN have pain without CTO: pain ≠ CTO.
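
A rough programming analogy (my illustration, not the course's): one functional role, many physical realizations. The class names here are hypothetical.

    # One functional role ("pain": damage in, avoidance out), many realizations.
    class HumanBrain:
        def pain(self, damage):
            # realized by cortico-thalamic oscillation (CTO)
            return {"belief": f"injured by {damage}", "desire": "avoid it"}

    class OctopusNervousSystem:
        def pain(self, damage):
            # realized by a distributed nervous system with no CTO
            return {"belief": f"injured by {damage}", "desire": "avoid it"}

    # Same input-output function, different hardware: pain != CTO.
    for creature in (HumanBrain(), OctopusNervousSystem()):
        print(creature.pain("heat"))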

Branching reproduction and Immortals

- Branching Brett - survives less with each generation: the psychological connections wane each generation, memories weaken with the passage of time, ambitions change - fewer and fewer direct psychological relations to any individual in his tree. 1. Brett's psychological connections wane each generation. 2. P2 survives P1 to the extent that they are psychologically connected. 3. Therefore Brett survives less with each generation. 4. Brett's survival comes in degrees. EVEN in an immortal life, survival comes in degrees; in a single life over time, we do not fully survive big life changes and transformative experiences.

Otto and Inga

- A case where Otto's notebook satisfies the parity principle. Inga - wants to see an art exhibit at MOMA, recalls that it is on 53rd Street, and walks there; she believed it was there before calling the fact to mind - it was stored in her long-term memory. Otto - has Alzheimer's, so he carries a notebook, writes things down as he learns them, and looks up information in it when needed. Otto wants to see the exhibit, looks up the address in his notebook, and walks there. The notebook plays the same functional role in his walk to MOMA as Inga's brain-bound belief; + parity principle = therefore Otto's notebook stores some of his beliefs.

1. hierarchical feature composition

- Classical ANNs are shallow: only a few layers separate input (a dog image) from output ("dog"). DCNNs are deep: many layers (250+) separate input and output, detecting progressively more abstract features such as contrast, lines, shapes, and edges; each step involves multiple specialized layers.
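
A minimal sketch of depth and hierarchical feature composition (assuming PyTorch; the cards name no framework, and the layer sizes here are arbitrary):

    import torch
    import torch.nn as nn

    # Each convolutional stage composes the previous stage's features into
    # more abstract ones (contrast/edges -> lines/shapes -> parts -> label).
    # Real DCNNs stack far more layers (the card says 250+).
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # low level: contrast, edges
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # mid level: lines, shapes
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),  # higher level: object parts
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, 2),                                        # output: "dog" vs "not dog"
    )

    print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 2])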

Extended mind argument

- If the mind is software and the hardware is the brain, EMT says the hardware extends into objects outside the body (smartphone, notebook, etc.). Ex. a hard drive can be inside the computer's case or external; it doesn't matter where the drive is located, only its functional role in the computer's programs.

Haugeland - strong AI

- "If you take care of the syntax, the semantics will take care of itself": syntax is sufficient for semantics. If the computer follows complex enough syntactic rules about how to manipulate symbols, that gives the symbols meaning (semantics) - complex computations give mental states meaning.

MBP response to robot reply

- Put the room in the robot's skull: it is still just symbol manipulation. As long as Gertie is running a formal computer program, she has no way of attaching any meaning to any of the symbols; the robot's causal interaction with the outside world doesn't help her.

Deep neural network (DNN)

- Incorporates insights from perception, attention, memory, and social cognition. Distinctive features: 1. depth, 2. hierarchical structure, and 3. abstraction.

primacy effect and recency effect and human chauvinism reply

- It is a law of memory that when people remember a list of words, the first and last words are recalled best. Primacy - the first words have already been committed to long-term memory. Recency - the last, most recent words are still stored in short-term (working) memory. These laws apply to Inga's memory but not to Otto's notebook, so the extended theory is unlikely to find unified psychological laws. Is this human chauvinism? Androids would commit everything to memory, with no primacy or recency effects - on this view Data has no memories/beliefs, when if anything he has more beliefs and better memory.

Mental representation and Original intentionality

- Mental representation is special because it has original intentionality (OI): mental states represent things intrinsically - they don't need to be interpreted. Mental states have meaning when they have original intentionality. Ex. Liz sees a cat: no one needs to go inside her mind and interpret that experience.

Mind-Brain Identity theory

- Mental states are type-identical to brain states: a mental state = a specific brain state, ex. pain is this particular brain state. The inner workings of the mind make all the difference - what matters for a mental state is the brain state.

Computer theory of mind - mindware

- Mind is to brain as software is to hardware --> software has functions/rules for how to manipulate symbols = mindware has functions/rules for how to manipulate internal mental states. Takes in input and produces output, ex. the input of bodily damage produces outputs of belief, desire, etc. Allows for multiple realizability - the same mindware can run on lots of hardware - and shows what minds have in common (same software). Mental states have both syntactic and semantic properties.

Decomposition objection

- Otto's notebook has the same functional role as Inga's memory only at an abstract level of description. Once we decompose Inga's memories into simpler functions, they 1. compute different functions than Otto's notebook and 2. are subject to different psychological laws.

Illusion of identity apple example

- Our cognitive faculties confuse similar, causally related bundles with identical bundles. Gradual change seems compatible with numerical identity over time while rapid change does not, yet there is no principled difference between the two kinds of change, so intuitions are not to be trusted as a guide to numerical identity over time. In the gradual change of people over time, bundles of mental states are causally connected, and this creates the illusion of personal identity over time - an unstable intuition that cannot be trusted.

Picture objection

- Pictures seem to represent things intrinsically, without needing to be interpreted - so not all non-mental representation is derived.

Hume

- skeptical about personal identity - shares a lot with buddhist no self theories

Parfit survival theory

- Survival has 2 features: 1. not transitive - B and C can both survive you, but they don't survive each other and aren't identical; 2. comes in degrees - identity is all or nothing (a = b or a ≠ b), but survival can be a little or a lot.

Systems reply to chinese room

- The whole system understands Chinese, not just Gertie (one part of the whole system). Searle's argument for premise 1 seems logically invalid: 1. no amount of syntax run by the woman will enable HER to understand Chinese; but how does it follow that 2. no amount of syntax will enable the wider system to understand Chinese?

Luminous room thought experiment

- There are limits to intuition in scientific theorizing: scientific theories may seem absurd when we're unfamiliar with the phenomena and consider only simple instances of them. An argument against the view that light is an EM field (Maxwell): Gertie shakes a magnet but produces no light - the oscillation of a hand-shaken magnet is far too slow. 1. Electricity and magnetism are forces. 2. Light has luminance. 3. Electromagnetism is not sufficient for luminance. 4. Therefore, electricity and magnetism are not sufficient for light.

Dennett Interpretive stance - view of mental states not having OG intent

- Mental states don't actually have original intentionality; they rely on interpretation. Macro patterns don't really exist intrinsically, ex. a glider in the Game of Life: by talking as if gliders exist, we highlight real patterns in the behavior of the Game of Life, BUT nothing in the Game of Life is intrinsically a glider before we interpret it that way.

Octopus example for multiple realizability

- Octopuses use tools, which shows beliefs, desires, and intentions; they show play and curiosity; and they feel pain. BUT their brains are completely different from ours: a small central brain, with a ganglion in each arm controlling that arm's motor movements autonomously.

Chinese room objection - Searle

- A thought experiment to show that strong AI is false: syntax is not sufficient for semantics and original intentionality. The Chinese room simulates a program that obeys all the syntactic rules of Chinese but has no semantic understanding of Chinese. Gertie (the operator) does not understand written Chinese - the symbols look like meaningless squiggles - but she has a rulebook that says how to manipulate them. There are Chinese speakers on either side of the room, and the operator's outputs are sensible responses to the inputs. Gertie runs a program that obeys written Chinese syntax but does not understand it; it is meaningless to her. If running this program is not enough to give her an understanding of Chinese, then syntax alone is not enough for understanding.

Nuisance Variation and Intentionality arg - Cetic

1. Intentionality requires aspectual shape. 2. DCNNs have aspectual shape because they adjust for nuisance variation (they represent the relevant aspects of an object, not irrelevant ones). 3. DCNNs learn the relevant aspects on their own. 4. So do they have original intentionality?

Parfit challenging beliefs

1. Maybe not all questions about personal identity have definitive answers. 2. Important practical questions, such as survival and responsibility, maybe don't hinge on personal identity.

octopus objection in standard form

1. Mind-brain identity theory implies that pain (a mental state) = CTO (a brain state). 2. If x = y, then something can't have x without y. 3. Octopus nervous systems do not have CTO (the brain state). 4. Therefore octopuses do not feel pain (the mental state) (from 1, 2, 3). 5. But octopuses do feel pain. 6. Therefore there is a contradiction.

Parity principle motivations

1. Multiple realizability - a mental state can be whatever physical state computes the relevant function (it does not need to be a brain state). 2. No neural chauvinism - the boundary of skin and skull matters only if we can identify a theoretically relevant difference between internal and external states.

Parrot abductive argument

1. Observations: LLMs seem coherent, are very large, just manipulate symbols, and humans are gullible judges. 2. Candidate explanations: the AI has meaning, or it is a stochastic parrot. 3. The stochastic-parrot hypothesis provides the best explanation of the experimental evidence. 4. Therefore, the stochastic-parrot hypothesis is likely true.

Standard form argument

1. premise 2. premise C therefore, .......

That we do not survive is liberating for...

1. Selfishness - why care about focusing only on ourselves and our futures if we will be different people? We don't give charitably and we destroy the planet for future generations, BUT on the identity view it is rational to pursue one's future self-interest over others'. 2. Fear of death - the fear of distant death, and regret, are only strengthened by beliefs about personal identity.

Problems with individuating bundles

1. Splitting minds - if the self = a bundle of experiences, how many bundles (selves) are there in your mind: one, two, three? It depends on how you categorize experiences, ex. splitting visual experiences from memories and imaginings - the theory gives no way to individuate a mind into selves. 2. Blending minds - bundles seem to persist because earlier mental states remain and cause later ones, but distinct people can have similar mental states and causally influence one another's mental states.

Response to systems reply

1. Suppose Gertie internalized all the rules in the rulebook. 2. She still wouldn't understand Chinese. 3. But everything in the room is now inside Gertie. 4. Therefore the system (the room) does not understand Chinese.

Chinese room master argument

1. Syntax is not sufficient for semantics (original intentionality). 2. Computer programs are entirely defined by their formal syntactic structure. 3. Minds have semantics (original intentionality). C. Therefore, no computer program by itself is sufficient to give a system a mind - strong AI is false.

LLM architecture

1. Transformers - predict the NEXT word in a sentence using *intermediate representations*; to translate text between languages, they take an input sentence, build *hierarchies of more abstract representations* of sentence meaning, and then produce output. 2. Embeddings - represent similarities between words in an n-dimensional space where things can have roughly n features; deeper layers are more abstract and capture world knowledge. 3. Context and attention - problem: word meaning depends on context. DNNs attend to different parts of the sentence (the attended parts influence word meanings); attention yields embeddings that capture context through hierarchies and abstraction. Layer 1 = initial attention to parts of speech and similar concepts; layer 2 = deeper context and understanding of the situation.
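
For reference, the standard scaled dot-product attention formula from the transformer literature (not stated on the card, but it is what the "context and attention" point describes informally):

\[ \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V \]

Q, K, and V are query, key, and value matrices computed from the word embeddings, and d_k is the key dimension; the softmax weights say how strongly each word attends to every other word, which is how context reshapes a word's representation.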

Abductive

= inference to the best explanation. 1. Observations are made. 2. Explanations are considered. 3. The best explanation is identified (most likely given the observations; simplest). 4. Therefore, the best explanation is taken to be correct.

The revised Turing test

A computer (or other being) is intelligent IFF it has the capacity to "produce a sensible sequence of verbal responses to a sequence of verbal stimuli, whatever they may be." Sufficient AND necessary. Intelligent if language - anything that can produce sensible responses is intelligent. Intelligent only if language - there is no intelligence without the capacity to produce sensible responses.

Psychological theory

A person P1 existing at time t1 is numerically identical to a person P2 existing at a later time t2 if and only if P1 is psychologically continuous with P2. The memory theory is just a special case of this (saying memory is the only connection that matters).

Deductive

A type of reasoning in which the conclusion follows necessarily from the premises. Deductive - if the premises are true, the conclusion must be true.

2. Abstraction, Aspectual Shape, Nuisance variation

Abstraction: adjustment for nuisance variation. Aspectual shape: the core feature of intentionality - having intentionality means representing certain aspects of an object and not others. Nuisance variation: variation in images that a network must ignore for correct classification.

Locke on Hume

Agrees - numerical identity depends on mental states (memory) - psychological theory - personality, values, beliefs. Disagrees - people persist through change: psychological connections ground personal identity over time. The bundle theory conflicts with intuitions about moral responsibility and personhood, the persistence of selves, the persistence of objects, and the individuation of minds - and intuition provides us with evidence about the truth. Buddhist philosophy = the self emerges from mental particulars.

Connectionism version of CTM

Any suitably programmed connectionist computer would literally have a mind with original intentionality - and our brains are such computers.

Dennett's combinatorial explosion reply

The assumption was that Blockhead is logically possible because there is a finite number of sensible conversations, but the problem is that finite numbers can be extremely large - hundreds of billions of conversational exchanges - so Blockhead is not physically possible: it could not exist given the laws of nature. His proposal: passing the TT in any physically possible way is sufficient for intelligence; perhaps intelligence is performing under nature's constraints.
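
A worked illustration of the explosion (the specific numbers are assumptions chosen for the arithmetic, not Dennett's): suppose each conversational exchange allows only \(10^{4}\) sensible continuations and an hour-long conversation contains 100 exchanges. Then Blockhead must store on the order of

\[ \left(10^{4}\right)^{100} = 10^{400} \]

conversation branches, dwarfing the roughly \(10^{80}\) atoms in the observable universe - logically possible, but not physically possible.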

Reply to parrot - meaning in AI?

BUT AI also has aspectual shape and represents novel properties. It seems weird that algebra could create meaning, but algebra can create novel representations with aspectual shape (the kind of thing that has meaning). Humans get aspectual shape from sensing and interacting, learning relationships, and abstracting; LLMs get aspectual shape from learning relationships and abstracting.

Quick probe reply (Dennett)

Block underestimates the power of the TT. Quick probe test - anything that passes the TT displays intelligence across an indefinite number of domains. Holding a conversation requires vast background knowledge and abilities - intelligence in indefinitely broad domains. Even the simple task of interpreting two sentences requires world knowledge, ex. about students and RAs.

Block's logical possibility reply

Blockhead is logically possible, and that teaches us something about the concept of intelligence: logical possibility teaches us about concepts.

Body Theory and Brain theory

Body = a person is numerically identical over time IFF they have the same living human body. Brain = a person is numerically identical over time IFF they have the same living human brain.

luminous room implication for CTM

CTM may seem absurd when 1. we're unfamiliar with the phenomenon (computation) and 2. we consider only simple instances of the phenomenon (hand symbol manipulation vs. complex computations).

Gullible Judge objection

Chatbots use silly tricks to win the imitation game, and the judge is partial to believing them. Bender: our sociability and charity - the tendency to attribute mental states - make us gullible; human understanding of coherence derives from our ability to recognize beliefs and intentions within context.

Connectionism reply and basic units

Churchlands - symbolic computations (squiggle -> squoggle) are too simple to create minds; minds are connectionist programs (artificial neural networks). Classical computation: units - symbols; representation - local, as individual symbols. Connectionist: units - neurons and their connections to other neurons; representation - distributed, as patterns of activation. Modeled on the brain - the brain is a very complex connectionist network. Distributed representation = representations aren't local; individual neurons don't represent things - representations are *distributed* patterns of activation.

Resemblance theory of pictures

Claim: pictures represent things because they resemble those things. 1. Picture P represents X IFF P resembles X - necessary and sufficient, no need for interpretation. Vagueness problem - resemblance is vague: everything resembles everything else in infinitely many ways, ex. a sock and Venus. Symmetry problem - if X resembles Y, then Y resembles X; but a person resembles her portrait without representing it - representation runs only one way. Reflexivity problem - everything maximally resembles itself, but most things don't represent themselves. C = pictures represent things because a linguistic community interprets them as representations.

Strong AI theory

Claim: syntax is sufficient for semantics and original intentionality. Weak AI → strong AI: Weak = the computer is a valuable tool that allows us to simulate minds. Strong = a suitably programmed computer literally has mental states (with original intentionality); CTM says "we are a computer like this."

Training objection and arg

DCNNs require a lot of data, ex. AlphaGo learned Go in days by studying 160,000 human games and self-playing millions more. Argument: 1. intelligence requires learning under spatial and temporal constraints (Dennett); 2. DNNs have few spatial and temporal constraints; 3. so DNNs are not intelligent and have no minds. Reply (Buckner) - they don't literally require the whole universe like Blockhead does, and humans also have access to much more data than we think - we process an enormous amount over time.

Perturbed and rubbish image experiment

DNNs that otherwise function well consistently make some classifications that make no sense to humans: given a perturbed image of a panda, a DNN classifies it as a completely different animal, and given a "rubbish" image of static, it will confidently classify it as an animal. This undermines the motivation for thinking that DNNs have intentionality, i.e. that they represent the relevant aspects of an image - does static have the relevant aspects of an armadillo? BUT this could be human chauvinism, because humans can be tricked by perturbed images too, and it is chauvinism to say OUR concepts are the correct ones.
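
A minimal sketch of how such a perturbed image can be produced, using the well-known fast gradient sign method (FGSM - an assumed choice for illustration; the card doesn't name the attack), assuming PyTorch:

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, eps=0.01):
        # Gradient of the classification loss with respect to the pixels.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Nudge every pixel slightly in the direction that increases the loss:
        # imperceptible to a human, but it can flip the network's label
        # (panda -> some other animal).
        return (image + eps * image.grad.sign()).detach()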

Adversarial examples objection + Cetic on OG intentionality

DNNs trained only on adversarial images can accurately classify natural images (ex. trained on adversarial "cat" images, they still recognize cats). Original intentionality: Ilyas's experiment shows that DNNs have ways of representing things that humans cannot comprehend; so DNN representations are NOT interpreted by us; so DNNs have original intentionality.

mental states

visual perceptions, pain, beliefs, desires, imaginings, hopes, fears, intentions, experiences

Parity principle

- CTM should allow that mental states are identical to external states, such as states of phones. Suppose a state of an external object (a smartphone) computes a function that a subject uses to perform a task; if that exact function were computed by a brain state B, we would say that B is a mental state. Therefore we should say that the external state is a mental state.

Bundle theory - Hume

- The belief that the self does not exist; it is merely the perceived set of relations between experiences, through resemblance and causation. The self is a bundle of mental states - perceptions, memories, emotions, etc. A bundle of mental states B1 is numerically identical to a later bundle B2 IFF B1 and B2 contain exactly the same mental states; AND since mental states change from moment to moment, we never remain the same bundle over time.

Large Language Model (LLM)

- An AI model that is trained on large amounts of text to identify patterns between words, concepts, and phrases so that it can generate sensible responses to prompts - they have abstract layers that capture more relevant aspects of the situation and context at hand

Parfit duplication cases

You are duplicated into two resulting people, B and C. Options: 1) you are B, not C, and 2) you are C, not B - these two seem arbitrary. 3) You are both - not consistent with transitivity: if B = A and A = C, then B = C, yet B ≠ C; if the relation we care about is identity (which is transitive), it already fails here - survival is not about identity. 4) You are neither - you have ceased to exist, yet the things we care about are retained. 5) B and C form a composite person P and you = P - but B and C are independent people who can't be held responsible for each other. 6) Two people were sharing one body and have now split off? None of the options is satisfactory, because the faulty assumption is that every question about personal identity must have a determinate answer.

Necessary vs. Sufficient

1. A necessary condition for "s" is something that has to be true for "s" to be realized. 2. A sufficient condition for "s" is something that if true, guarantees "s" (ONE OF MANY POSSIBLE SITUATIONS). All of the necessary conditions have to hold for "s" to be true.
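
In conditional form (my symbolic restatement of the definitions above):

\[ N \text{ is necessary for } S:\ S \Rightarrow N \qquad\qquad C \text{ is sufficient for } S:\ C \Rightarrow S \]

Ex. from the revised Turing test card: passing is both necessary and sufficient for intelligence, so intelligent \(\Leftrightarrow\) passes.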

Simulation objection

1. A simulated X is not X 2. Passing the TT shows only that a computer can simulate intelligence 3. Therefore, TT is not even sufficient for intelligence

Connectionist chinese room objection

1. DCNNs are fundamentally matrix algebra that corresponds to symbols and manipulates them. 2. Suppose Gertie has a book of all the matrix algebra in a DCNN that represents Chinese symbols. 3. That doesn't give the symbols meaning. Bender - languages are systems of signs (pairings of form AND meaning); the training data for LMs is only form, so they have no access to meaning, and claims about model abilities must be carefully characterized. No amount of matrix algebra will give meaning to the symbols the algebra represents; a mind is needed to interpret the symbols, give them meaning, and make them latch onto things outside the system.

Hume bundle argument

1. I experience myself only through my mental states 2. So my "self" is a bundle of mental states 3. B1 = B2 iff B1 and B2 contain all the same mental states 4. Mental states change from moment to moment 5. So my self does not persist over time

Memory theory on 3 objections from before

1. Non-human persons - ex. a super-smart parrot that is intelligent and rational: non-humans CAN be people if they have episodic memories. 2. Body/brain swap - body swapping occurs IFF episodic memories transfer into the new body. 3. Life after death - you remain the same person post-resurrection so long as you have episodic memories of your old life; the possibility hinges on questions about resurrection/death, not personal identity. Method: inference to the best explanation - given the data points and cases, Locke finds that his theory is the best explanation and accepts it.

body theory objections

1. Non-human persons - it is logically possible for a parrot to be a person (it can talk), and maybe some animals are as intelligent as humans, with complex intellectual minds, and should be treated as such; but because they are non-human, they are excluded by the body theory ("same living human body/brain"). "A person is a human being" is human chauvinism: it rules out the possibility that robots or aliens are people, and not on the basis of empirical, observable evidence - you can't build that into the definition without observation. A person is really a conscious thinking thing, and the question is what it is for THAT to persist. 2. Body/brain swapping - just having the same functioning human body doesn't constitute the same person; personhood goes where the mind goes, not where the physical body and brain go. 3. Life after death - resurrection need not be resurrection of the same human body (heaven: the soul goes somewhere else); the question is what happens to your mind after death, and the body theory says that if your human organism is gone, your mind is gone.

Regress Objection to Intentional Stance

1. The intentional stance assumes we can say a person P has a mental state only if some other person D interprets P's behavior. 2. Interpretation is an activity that requires mental states (beliefs, desires, opinions). 3. Therefore some other person DD must interpret interpreter D's behavior, some person DDD must interpret DD's behavior, and so on. The intentional stance creates an infinite chain of ever more complicated interpreters, but it is implausible that such a chain exists. How to stop the regress? Admit there is original intentionality that exists before any interpretation.

Behaviorism

Intelligence is defined by whether you can produce a certain kind of behavior (language) NOT defined by whether your brain is made of meat or silicon or how you produce language

drunkenness problem

Locke - post-2am sober Billy is not the same person as 12-2am drunk Billy, because sober Billy does not remember that time at all.

Algorithmic bias and policing

Machine learning algorithms learn racist, sexist, and homophobic biases from their training sets; they have no values with which to distinguish acceptable from unacceptable speech, and they are risky to use in policing because predictive algorithms can be racist.

Syntax vs Semantics

Syntax - the formal properties of symbols: letters, shapes, grammar. Computers operate on syntax; they don't need to know the meanings of symbols, just the rules for manipulating them. CTM: syntactic rules for how to manipulate mental states to produce other mental states - symbols are manipulated according to formal syntactic rules that preserve semantic properties such as truth. Semantics - the meaning of symbols, words, and sentences ('dog' = 'chien': same semantic content) - properties like rationality and truth.

Inarticulate intelligence objection

The TT implies that language is necessary for intelligence, but animals and children are arguably very intelligent and lack language. Reply 1, narrow meaning: there are broad and narrow meanings of intelligence - the broad sense includes learning, self-knowledge, etc., while the narrow sense requires reason and rationality, and so language; the TT is necessary only for narrow intelligence. Reply 2, sufficient not necessary: passing the TT is sufficient but not necessary for intelligence, so any being that passes is intelligent, but beings that fail the TT (children, animals, etc.) could still be intelligent for other reasons.

closest continuer theory

The idea that whatever later person is the closest continuer of the original is numerically identical to the original: p2 = p1 IFF 1. p2 is psychologically continuous with p1, and 2. no other person is equally or more continuous with p1.

intentional stance

The tendency to explain or predict the behavior of others using intentional states (e.g. wanting, liking). By talking as if mental states exist, we highlight real patterns in human behavior; intentional mental states don't exist intrinsically before we interpret them. A person P has beliefs, desires, etc. only if someone interprets P's behavior from the intentional stance - so mental states have derived rather than original intentionality.

Google effect

The tendency to forget information that can be found readily online using Internet search engines. Optimism - computers reduce our brain-bound beliefs but increase our extended beliefs; we learn new strategies for coupling with external objects.

Black box objection (Block)

Turing test makes intelligence a "black box", anything with sensible verbal behavior is intelligent and inner workings of the box don't matter

type and token identity theories

Type identity theory - every type of mental state (ex. pain) = a type of brain state (ex. cortico-thalamic oscillation). Token identity theory - every token mental state (ex. Zach's specific pain) = a token brain state (ex. the state in Zach's brain). Type identity theory implies token identity theory, not vice versa. Ex: suppose every token colored thing = some token shaped thing; that wouldn't imply that every type of color corresponds to a type of shape, but type identity would imply token identity.

types and tokens

Types - categories of things, ex. one type: dog. Tokens - instances of that category, ex. two token dogs of that one type.
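
A rough programming analogy (my illustration, not the course's): a class is like a type, and its instances are like tokens of that type.

    class Dog:                        # one type
        pass

    rex, fido = Dog(), Dog()          # two tokens of the one type
    print(type(rex) is type(fido))    # True: same type
    print(rex is fido)                # False: numerically distinct tokens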

Validity and Soundness

Valid - structurally good argument: if the premises are true, the conclusion MUST be true as well (doesn't require the premises to actually be true). Sound - requires that the premises are all actually true AND that the argument is valid.

