Phil 3 Final


Disjunction

False if and only if both statements are false; otherwise it is true
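This truth condition can be sketched in a few lines of Python, since Python's `or` on booleans behaves exactly like inclusive disjunction (the snippet is illustrative, not part of the course material):

```python
def disjunction(p: bool, q: bool) -> bool:
    """False if and only if both disjuncts are false; otherwise true."""
    return p or q

# Enumerate the full truth table
for p in (True, False):
    for q in (True, False):
        print(p, q, disjunction(p, q))
```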

Method for adjudicating between competing explanations

Method:
1. State the theory and check for consistency
2. Assess the evidence for the theory
3. Formulate and scrutinize alternative theories
4. Compare the theories using the explanatory criteria of adequacy

Consider at least two explanations when forming an inference.

Case study 1: The Long Island Medium
Explanandum: Theresa Caputo appears to talk to audience members' deceased loved ones and seems to know a great deal of information about them.

Criteria for evaluating analogical arguments

1. Relevant similarities: How many shared properties are there that actually affect the likelihood of the conclusion?
2. Relevant dissimilarities: How many dissimilarities are there that could make the conclusion less likely to be true?
3. Number of instances compared: How many things are being compared?
4. Diversity among cases: How much diversity is there among the cases that exhibit the relevant similarities?

Example:
1. Puppies and house cats are intelligent, (relatively) social, domesticated, sentient mammals. Moreover, it is wrong to butcher puppies and house cats for food.
2. Pigs are intelligent, (relatively) social, domesticated, sentient mammals.
3. So, it is wrong to butcher pigs for food.

Similarities: intelligent, sociable, domesticated, sentient, mammals, curious, sometimes live on farms
Dissimilarities: kept as pets, furry, cute, smell, historically thought of as food, emotionally attuned

Proposition

A proposition is the information content imparted by a statement (or its meaning)

Cogency

An inductive argument is cogent if it is strong and the premises are all true; otherwise, it is uncogent

Inductive argument

An inductive argument is one in which the premises are intended to provide probable but not logically conclusive support for the conclusion. Premises give us reason to think that the conclusion is more likely than not to be true. Premises could be true while conclusion could fail to be true, even in a perfect inductive argument.

Strength

An inductive argument is strong if the premises successfully provide probable logical support for the conclusion; otherwise, it is weak. Unlike validity, strength comes in degrees, e.g., the conclusion having a probability of 55% vs. 99%.

Belief

A belief is a mental acceptance of a proposition

Causal arguments

A causal argument is an inductive argument whose conclusion contains a causal claim.

Enumerative causal argument example:
-Two days ago, sniffing a flower caused me to sneeze.
-Yesterday, sniffing a flower caused me to sneeze.
-Today, sniffing a flower caused me to sneeze.
-So, sniffing flowers causes me to sneeze.

Analogical causal argument example:
-Tickling Jake's arm and tickling Jake's knee both cause him to laugh.
-Tickling Jake's foot is similar to tickling Jake's arm and tickling Jake's knee.
-So, tickling Jake's foot will cause him to laugh.

Type vs. token event:
Type: a kind of event that can have multiple particular instances ("he threw the book on the floor twice" describes two tokens of one type)
Token: a particular event

Causal pitfalls:
-Misidentifying relevant factors
-Mishandling multiple factors
-Being misled by coincidence
-Confusing causation with temporal order (post hoc fallacy)
-Confusing cause and effect
-Ignoring common-cause explanations

Compound statement

A compound statement is a statement that has at least one logical operator as a component

Contingent

A contingent statement is one that is neither necessarily true nor necessarily false (and so has at least one T and at least one F under its main operator in a truth table)

Deductive argument

A deductive argument is one in which the premises are intended to provide logically conclusive support for the conclusion. It's logically impossible for premises to be true and the conclusion to be false.

Soundness

A deductive argument is sound if it is valid and all the premises are true; otherwise, it is unsound

Validity

A deductive argument is valid if the premises successfully provide decisive logical support for the conclusion; otherwise, it is invalid

Inclusive disjunction

A disjunction where at least one of the disjuncts is true, possibly both

Exclusive disjunction

A disjunction where the author is intending to say that one or the other of the disjuncts is true, but not both
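A quick Python sketch of the difference (illustrative, not course material): inclusive disjunction is `or`, while exclusive disjunction can be modeled as inequality between the two truth values.

```python
def inclusive_or(p: bool, q: bool) -> bool:
    return p or q    # true when at least one disjunct is true, possibly both

def exclusive_or(p: bool, q: bool) -> bool:
    return p != q    # true when exactly one disjunct is true

# The two operators differ only in the row where both disjuncts are true
for p in (True, False):
    for q in (True, False):
        print(p, q, inclusive_or(p, q), exclusive_or(p, q))
```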

Necessary condition

A necessary condition occurs whenever one thing is required for another thing to be realized

Red herring fallacy

A red herring fallacy occurs when someone ignores an opponent's position and changes the subject, diverting the discussion in a new direction. It differs from the straw man fallacy because the arguer doesn't construct a new argument; they just change the subject.

-"Sarah: I believe that my opponent is not fit for office because he has no previous political experience."
-"Harry: I am, in fact, fit for politics. And we have to do something about the rampant corruption we see within government. I hate corruption, and I'll do my best to expose it."

Instead of saying why he rejects her claim, Harry changes the subject to a sensational issue: an abrupt shift in topic from inexperience to corruption.

Self-contradiction

A self-contradiction is a statement that is necessarily false (and so has all F's under its main operator in a truth table). Can't be true, must be false

Statement

A statement is a sentence that is either true or false. Commands, questions, and exclamations are not statements.

Simple statement

A simple statement is one that does not have any other statement or logical operator as a component

Slippery slope fallacy

A slippery slope fallacy occurs when an argument links events to suggest an alleged, but unsupported, chain reaction with an undesirable conclusion.

-If gay marriage is legalized, soon anyone will be able to marry anything! Before long, people will be marrying animals, and then objects. The whole institution of marriage will collapse!
-Once you start smoking weed, you're on the path to hard drugs. And once you start hard drugs, you'll become addicted and ruin your life.

Slippery slopes are not fallacious when every link in the chain is supported by evidence:
-Habitual smoking tends to cause lung cancer. Lung cancer tends to cause death. So habitual smoking tends to cause death.

Straw man fallacy

A straw man fallacy occurs when an objector misrepresents an argument in order to construct a new, weaker argument that can be easily refuted.

-"Sarah: There's nothing inherently wrong with GMOs. Evidence suggests that some GMOs are actually healthier than their non-GMO counterparts."
-"Henry: Sarah is a loon! According to her, we should only eat GMOs. That would be extremely dangerous, since many GMOs are untested!"

Sarah never claimed that we should ONLY eat GMOs; by attributing this implausible claim to her, Henry makes her position easier to refute.

Straw man attacks can also be directed at groups or institutions:
-Democrats think everyone is entitled to free healthcare.
-Republicans say the rich shouldn't be taxed.

Sufficient condition

A sufficient condition occurs whenever one thing ensures that another thing is realized

Tautology

A tautology is a statement that is necessarily true (and so has all T's under its main operator in a truth table). Must be true, can't be false

Truth table

A truth table is an arrangement of truth values for a truth-functional compound proposition that displays, for every possible case, how the truth value of the compound proposition is determined by the truth values of its simpler components. Truth tables are used to determine the validity of deductive arguments.
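As an illustration (not part of the course material), a truth table can be generated mechanically. Here statements are modeled as Python functions of their component truth values, with (P v Q) -> P as a sample compound, and the main-operator column is used to classify the statement as a tautology, self-contradiction, or contingent statement:

```python
from itertools import product

def truth_table(statement, variables):
    """Pair every assignment of truth values with the statement's resulting value."""
    rows = []
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        rows.append((env, statement(**env)))
    return rows

# Sample compound: (P v Q) -> P, with -> written as (not A) or B
rows = truth_table(lambda P, Q: (not (P or Q)) or P, ["P", "Q"])
for env, value in rows:
    print(env, value)

# Classify by the main-operator column
column = [value for _, value in rows]
kind = ("tautology" if all(column)
        else "self-contradiction" if not any(column)
        else "contingent")
print(kind)
```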

Ad hominem fallacy

Ad hominem fallacies occur when an argument is rejected based on an inappropriate attack against the person presenting the argument.

General structure:
-Person X presents an argument (or is about to)
-Person Y attacks the characteristics or circumstances of X
-Based on the attack against X, Y rejects X's argument (the argument is dismissed because of the person or group presenting it rather than on its own merits)

Three versions:

Ad hominem abusive: an attack on the characteristics of the proponent of an argument rather than the argument itself (when the proponent's characteristics are irrelevant to the strength of the argument)
-"She thinks people use their phones way too much these days, but she's just old and out of touch."
-"He keeps telling me I shouldn't eat factory-farmed meat because it's bad for the environment and the animals. He's a real a**hole."
-"Bad performance by crooked Hillary Clinton! Reading poorly from the teleprompter! She doesn't even look presidential!"

Ad hominem circumstantial: the rejection of a proponent's argument based on their circumstances, including political affiliation, educational institution, place of birth, religion, income/class, race, sex/gender, or career; the way the proponent is related to their environment is used as a way of rejecting their argument
-"He argues that you shouldn't have sex until you're married. That's nonsense; he only says that because he grew up in a conservative household."
-"She insists that everyone should use gender-neutral pronouns in writing. She's merely caught in the grips of a liberal ideology."
-"Ratings starved CNN and CNN Politics does not cover me accurately. Why can't they get it right - it's really not that hard!"

Ad hominem tu quoque (you, too!): attempting to dismiss an argument by claiming that the other person is a hypocrite
-"You say I shouldn't cheat on my exams because it's unfair to other students. But you cheated on your calculus test last year!"

Legitimate attacks: not all attacks are fallacious.
-"He's a pathological liar. Be incredulous about what he says!"
-"She's being paid by tobacco lobbies to produce arguments that discredit public health officials. She has external incentives to lie."

Post-hoc fallacy

After the fact, therefore because of the fact. Just because one event precedes another isn't enough to justify a causal connection. -Anti-vaxxers believe that because reports of autism rose after vaccination rates rose, vaccines must cause autism.

Conditional

False if and only if the antecedent is true and the consequent is false; otherwise it is true

Abductive arguments (inference to the best explanation)

An abductive argument moves from premises about events or states of affairs to an explanation for those events or states of affairs.

Schema:
1. Q is the case.
2. E provides the best explanation for Q.
3. Probably, E is true.

Example: The sidewalks were wet this morning. The best explanation is that it rained last night. So, it rained last night.

Explanandum: the thing being explained
Explanans: the claims doing the explaining (basically, the reason)

Inference to the best explanation:
Explanandum: your roommate did not come home last night
Potential explanations:
-They're being held ransom by terrorists
-They're in the hospital
-They died
-They moved without telling you
-They're playing hide and seek
-They spent the night at someone else's house (bingo)

Internally consistent theories are free of contradictions. Externally consistent theories are consistent with, and fully account for, the relevant explanandum.

Argument

An argument is a group of statements in which some of the statements (the premises) are intended to rationally support another of the statements (the conclusion)

Enthymeme

An enthymeme is an argument that has an important implicit premise or an implicit conclusion.

Example: Prostitution is deeply immoral; therefore, prostitution should be illegal. Possible implicit premise: all deeply immoral things should be illegal.

Step 1. Search for a credible premise that would make the argument valid, one that would furnish the needed link between premise (or premises) and conclusion. Choose the supplied premise that (a) is most plausible and (b) fits best with the author's intent. The first stipulation (a) means that you should look for premises that are either true or likely to be thought true. The second stipulation (b) means that premises should fit—that is, at least not conflict—with what seems to be the author's point or purpose (which, of course, is sometimes difficult to discern). If the premise you supply is plausible and fitting, use it to fill out the argument. If it is either not plausible or not fitting, go to step 2.

Step 2. Search for a credible premise that would make the argument as strong as possible. Choose the supplied premise that fulfills stipulations (a) and (b). If the premise you supply is plausible and fitting, use it to fill out the argument. If it is either not plausible or not fitting, consider the argument beyond repair and reject it.

Step 3. Evaluate the reconstituted argument. If you're able to identify a credible implicit premise that makes the argument either valid or strong, assess this revised version of the argument, paying particular attention to the plausibility of the other premise or premises.

Analogical arguments

Analogical reasoning occurs when an arguer makes a comparison between two or more like things. An argument by analogy occurs when an analogy is used in an inductive argument.

Schema:
1. Thing A has such and such properties plus the property P.
2. Thing B also has such and such properties.
3. So, B has property P.

Examples:
-Canada and the U.S. are large, industrialized, North American nations most of whose citizens speak English. Mexico is a large, industrialized, North American nation. So, most Mexican citizens speak English.
-Taco Bell quickly serves tasty, fatty junk food to its customers and is busy at night. In-N-Out quickly serves tasty, fatty junk food to its customers. Probably, In-N-Out is busy at night.

Appeal to unqualified authority

An appeal to unqualified authority occurs anytime someone inappropriately appeals to epistemic authority. -Example: appealing to Albert Einstein's opinion about the best movie of the 1930s in order to justify the claim that it is the best movie. On the basis of Einstein's recommendation alone, you can't conclude that it's the best movie, because his expertise lies in physics, not film.

Truth-functional propositions

Compound statements (in PL) are truth-functional propositions, which means that their truth value is wholly determined by the truth values of their component statements and by their logical operators. Truth-functionality results from the fact that logical operators are defined by how they produce output truth values from input truth values.

Relevant dissimilarities

Generally, the more relevant dissimilarities, or disanalogies, there are between the things being compared, the less probable the conclusion. Dissimilarities weaken arguments by analogy.

Consider argument 1 (regarding Drug Z). Mice are mammals, have a mammalian circulatory system, have typical mammalian biochemical reactions, respond readily to high blood pressure drugs, and experience a reduction in blood cholesterol when given the new Drug Z. Humans are mammals, have a mammalian circulatory system, have typical mammalian biochemical reactions, and respond readily to high blood pressure drugs. Therefore, humans will also experience a reduction in blood cholesterol when given the new Drug Z.

What if we discover that cholesterol-lowering drugs that work in mice almost never work in humans? This one dissimilarity would severely weaken the argument and make the conclusion much less probable. Pointing out dissimilarities in an analogical induction is a common way to undermine the argument. Sometimes finding one relevant dissimilarity is enough to show that the argument should be rejected.

Argument 2 (the watch argument): A watch is a mechanism of exquisite complexity with numerous parts precisely arranged and accurately adjusted to achieve a purpose—a purpose imposed by the watch's designer. Likewise the universe has exquisite complexity with countless parts—from atoms to asteroids—that fit together precisely and accurately to produce certain effects as though arranged by plan. Therefore, the universe must also have a designer.

A familiar response to the watch argument is to point out a crucial dissimilarity between a watch and the universe: the universe may resemble a watch (or mechanism) in some ways, but it also resembles a living thing, which a watch does not.

Group-centered thinking

It is harder to think critically about the beliefs of a group to which you belong than about those of a group to which you don't. It's very difficult to think critically about a group that you're in or that exerts power over you.

Homogeneity and heterogeneity

Homogeneous: a target group is homogeneous relative to some property P whenever there is not much variation within the group with respect to P (e.g., mass-produced items, Coke bottles, textbooks).
Heterogeneous: a target group is heterogeneous relative to some property P whenever there is a lot of variation within the group with respect to P (e.g., all living beings on Earth); to make an inference about a heterogeneous group, you need a pretty large sample.

Material conditional

The material conditional is an operator that takes two statements as input and creates a compound statement that is false if and only if the antecedent is true and the consequent is false; in every other case the compound statement is true. So if the antecedent is false, you know immediately that the conditional is true. All this is really saying is that the antecedent is sufficient for the consequent and the consequent is necessary for the antecedent.
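The truth conditions are easy to verify by enumeration; a minimal Python sketch (illustrative, not course material):

```python
def material_conditional(antecedent: bool, consequent: bool) -> bool:
    """False if and only if the antecedent is true and the consequent is false."""
    return (not antecedent) or consequent

# The only false row is antecedent=True, consequent=False
for a in (True, False):
    for c in (True, False):
        print(a, c, material_conditional(a, c))
```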

Steps for classifying an argument

-If the argument would be valid if it were deductive, then interpret it as deductive.
-If the argument would be strong if it were inductive, then interpret it as inductive.
-If an argument looks to be deductive or inductive, assume it is intended to be so.

How to prove validity with truth tables

If there is any row in the truth table where the premises are true and the conclusion is false, then it is invalid. If there is no row like that, it is valid.
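The row-checking procedure can be sketched as a brute-force Python validity test, with premises and conclusion modeled as functions of the statement letters (the function names and example arguments here are illustrative, not from the course):

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """Valid iff no row makes every premise true while the conclusion is false."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(**env) for p in premises) and not conclusion(**env):
            return False  # found a counterexample row: invalid
    return True

implies = lambda a, b: (not a) or b

# Modus ponens (P -> Q, P, therefore Q) is valid
print(is_valid([lambda P, Q: implies(P, Q), lambda P, Q: P],
               lambda P, Q: Q, ["P", "Q"]))

# Affirming the consequent (P -> Q, Q, therefore P) is invalid
print(is_valid([lambda P, Q: implies(P, Q), lambda P, Q: Q],
               lambda P, Q: P, ["P", "Q"]))
```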

Indicators and rules of epistemic authority

Indicators: -Education/training from reputable institutions in the relevant field -Experience making reliable judgements in the relevant field -Good reputation amongst peers -Many professional accomplishments Rules of thumb: -If most experts accept a claim, we have good reason to accept it. -If most experts reject a claim, we have good reason to reject it. -If experts disagree about a claim, we have good reason to withhold judgement. -Not every issue should be settled by experts. Some warning signs: -Expert stands to gain from claims being made -Expert makes simple factual or formal errors -Expert's claims conflict with what you have good reason to believe -Expert does not adequately support assertions (even when asked to) -Expert does not treat opposing views fairly -Expert is strongly emotional or dismissive

Distinguishing between inductive and deductive arguments

Step 1. Is it the case that if the premises are true, the conclusion must be true? If yes, treat the argument as deductive, for it is very likely meant to offer conclusive support for its conclusion. The argument, then, is deductively valid, and you should check to see if it's sound. If no, proceed to the next step.

Step 2. Is it the case that if the premises are true, the conclusion is very probably true? If yes, treat the argument as inductive, for it is very likely meant to offer very probable support for its conclusion. The argument, then, is inductively strong, and you should check to see if it's cogent. If no, proceed to the next step.

Step 3. Generally, if an argument looks deductive or inductive because of its form, assume that it is intended to be so.

Step 4. Generally, if an argument looks deductive or inductive because of indicator words (and its form yields no clues), assume that it is intended to be so. Terms that signal a deductive argument include "it necessarily follows that," "it logically follows that," "absolutely," "necessarily," and "certainly." Words signaling an inductive argument include "likely," "probably," "chances are," "odds are," and "it is plausible that." Such indicator words, though, are not foolproof clues to argument type because they are sometimes used in misleading ways. For example, someone might end an inductively strong argument with a conclusion prefaced with "it necessarily follows that," suggesting that the argument is deductively valid.

Relationship between margin of error and confidence level

Margin of error: a margin of error reports a range of values within which the value reflecting the actual distribution of the relevant property in the target group is claimed to fall (the conclusion carries the margin of error).
Confidence level: a confidence level reports the probability that the sample will accurately represent the target group within the margin of error (the degree to which the premises support the conclusion).
The larger the margin of error, the higher the confidence level; the smaller the margin of error, the lower the confidence level.
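The trade-off can be seen concretely with the standard normal-approximation formula for a sample proportion (a hedged sketch; the z-scores, sample size, and 50% observed proportion are illustrative assumptions, not from the course):

```python
from math import sqrt

def margin_of_error(p_hat: float, n: int, z: float) -> float:
    """Normal-approximation margin of error for sample proportion p_hat, sample size n."""
    return z * sqrt(p_hat * (1 - p_hat) / n)

# Same poll (n = 1000, observed proportion 0.5) at three confidence levels:
# demanding a higher confidence level widens the margin of error.
for label, z in [("90%", 1.645), ("95%", 1.96), ("99%", 2.576)]:
    print(f"{label} confidence: +/- {margin_of_error(0.5, 1000, z):.3f}")
```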

Simplicity

Other things being equal, the best theory is the one that is the simplest—that is, the one that makes the fewest assumptions. The theory making the fewest assumptions is less likely to be false because there are fewer ways for it to go wrong. Another way to look at it is that since a simpler theory is based on fewer assumptions, less evidence is required to support it.

The criterion of simplicity has often been a major factor in the acceptance or rejection of important theories. For example, simplicity is an important advantage that the theory of evolution has over creationism, the theory that the world was created at once by a divine being (see chapter 9). Creationism must assume the existence of a creator and the existence of unknown forces (supernatural forces used by the creator). But evolution does not make either of these assumptions.

Scientists eventually accepted Copernicus's theory of planetary motion (heliocentric orbits) over Ptolemy's (Earth-centered orbits) because the former was simpler. In order to account for apparent irregularities in the movement of certain planets, Ptolemy's theory had to assume that planets have extremely complex orbits (orbits within orbits). Copernicus's theory, however, had no need for so much extra baggage. His theory could account for the observational data without so many orbits-within-orbits.

Fruitfulness

Other things being equal, theories that perform this way—that successfully predict previously unknown phenomena—are more credible than those that don't. They are said to be fruitful, to yield new insights that can open up whole new areas of research and discovery. This fruitfulness suggests that the theories are more likely to be true. If a friend of yours is walking through a forest where she has never been before, yet she seems to be able to predict exactly what's up ahead, you would probably conclude that she possessed some kind of accurate information about the forest, such as a map. Likewise, if a theory successfully predicts some surprising state of affairs, you are likely to think that the predictions are not just lucky guesses. All empirical theories are testable (they predict something beyond the thing to be explained). But fruitful theories are testable and then some. They not only predict something, they predict something that no one expected. The element of surprise is hard to ignore.

Enumerative arguments and their constituents

Part-to-whole schema:
1. X percent of the observed members of group A have property P.
2. Probably, X percent of group A have property P.

Example: Almost all UCSB TAs I've met enjoy talking to their students. Probably, almost all UCSB TAs enjoy talking to their students. (Part of group --> claim about whole group)

Target group: the whole collection of individuals in question; the group the conclusion is making a claim about
Sample: the observed members of the target group
Relevant property: the property we're interested in

Two things are needed:
-Adequate sample size
-Representativeness of the sample (the sample needs to be large enough for the target group and to resemble the target group in every relevant respect)

Skepticism

Position that truths within some domain are unknowable. If someone thought nothing was knowable and believed that no beliefs were better than others then we would have no reason to think critically. But sometimes people are led to skepticism by critical thinking. Skepticism relies on the idea that knowledge requires absolute certainty

Distinction between epistemic and practical authority

Practical authority: Someone has practical authority whenever they have the power to command others to do things Epistemic authority: Someone has epistemic authority (relative to some domain) whenever they possess specialized knowledge, or techniques for gaining knowledge, about some domain

Subjective relativism

The view that truths in some domain are relativized to individuals, in contrast to the assumption that there are objectively better and objectively worse ways of thinking. The most extreme version: a person's beliefs are always true for them. If you think you can't be wrong, there's no need for critical thinking.

Selection biases

Selection bias and self-selection bias: you can only get responses from people who have something to say, and this can sway results radically. Also, pollsters incentivize taking polls with rewards, and people who want rewards will share certain properties. People aren't reliable sources about themselves even if they intend to be.

Question phrasing: "Do you like to drink the stuff that is squirted out of a bovine's mammary glands?" vs. "Do you like America's favorite drink, milk?"

Question ordering: (1) Do you like to drink prune juice? (2) Do you like to drink milk? Ordering can prime certain ideas in participants' minds and affect answers to later questions.

Restricted choices: What do you think of milk? (select one) (a) I love milk (b) I like milk (c) Milk is my least favorite drink

Psychological obstacles

-Self-centered thinking
-Group-centered thinking
-Resisting contrary evidence
-Looking for confirming evidence
-Preferring available evidence

Reconstructing arguments from passages

Step 1. Study the text until you thoroughly understand it. You can't locate the conclusion or premises until you know what you're looking for—and that requires having a clear idea of what the author is driving at. Don't attempt to find the conclusion or premises until you "get it." This understanding entails having an overview of a great deal of text, a bird's-eye view of the whole work. Step 2. Find the conclusion. When you evaluate extended arguments, your first task, as in shorter writings, is to find the conclusion. There may be several main conclusions or one primary conclusion with several subconclusions. Or the conclusion may be nowhere explicitly stated but embodied in metaphorical language or implied by large tracts of prose. In any case, your job is to come up with a single conclusion statement for each conclusion—even if you have to paraphrase large sections of text to do it. Step 3. Identify the premises. Like the hunt for a conclusion, unearthing the premises may involve condensing large sections of text into manageable form—namely, single premise statements. To do this, you need to disregard extraneous material and keep your eye on the "big picture." Just as in shorter arguments, premises in longer pieces may be implicit. At this stage you shouldn't try to incorporate the details of evidence into the premises, though you must take them into account to fully understand the argument. examples: Contemporary debates about torture usually concern its use in getting information from suspects (often suspected terrorists) regarding future attacks, the identity of the suspects' associates, the operations of terrorist cells, and the like. How effective torture is for this purpose is in dispute, mostly because of a lack of scientific evidence on the question. We are left with a lot of anecdotal accounts, some of which suggest that torture works, and some that it doesn't. People who are tortured often lie, saying anything that will make the torturers stop. 
On the other hand, in a few instances torture seems to have gleaned from the tortured some intelligence that helped thwart a terrorist attack. Is torture sometimes the right thing to do? The answer is yes: in rare situations torture is indeed justified. Sometimes torturing a terrorist is the only way to prevent the deaths of hundreds or thousands of people. Consider: In Washington, D.C. a terrorist has planted a bomb set to detonate soon and kill a half million people. FBI agents capture him and realize that the only way to disarm the bomb in time is for the terrorist to tell them where it is, and the only way to get him to talk is to torture him. Is it morally permissible then to stick needles under his fingernails or waterboard him? The consequences of not torturing the terrorist would be a thousand times worse than torturing him. And according to many plausible moral theories, the action resulting in the best consequences for all concerned is the morally correct action. When we weigh the temporary agony of a terrorist against the deaths of thousands of innocents, the ethical answer seems obvious. (1) In a ticking-bomb scenario, the consequences of not torturing a terrorist would be far worse than those of torturing him. (2) The morally right action is the one that results in the best consequences for all concerned. (3) Therefore, in rare situations torture is morally justified. Edgardo Cureg was about to catch a Continental Airlines flight home on New Year's Eve when he ran into a former professor of his. Cureg lent the professor his cell phone and, once on board, went to the professor's seat to retrieve it. Another passenger saw the two "brown-skinned men" (Cureg is of Filipino descent, the professor Sri Lankan) conferring and became alarmed that they, and another man, were "behaving suspiciously." The three men were taken off the plane and forced to get later flights. The incident is now the subject of a lawsuit by the ACLU. 
Several features of Cureg's story are worth noting. First, he was treated unfairly, in that he was embarrassed and inconvenienced because he was wrongly suspected of being a terrorist. Second, he was not treated unfairly, because he was not wrongly suspected. A fellow passenger, taking account of his apparent ethnicity, his sex and age, and his behavior, could reasonably come to the conclusion that he was suspicious. Third, passengers' anxieties, and their inclination to take security matters into their own hands, increase when they have good reason to worry that the authorities are not taking all reasonable steps to look into suspicious characters themselves. . . . Racial profiling of passengers at check-in is not a panacea. John Walker Lindh could have a ticket; a weapon could be planted on an unwitting 73-year-old nun. But profiling is a way of allocating sufficiently the resources devoted to security. A security system has to, yes, discriminate—among levels of threat. (1) Racial profiling is a reasonable response in light of our legitimate concerns about security. (2) Profiling is a way of allocating sufficiently the resources devoted to security. (3) Therefore, discrimination by racial profiling is a justified security measure.

Philosophical obstacles

-Subjective relativism
-Social relativism
-Skepticism

Scope

Suppose theory 1 and theory 2 are two equally plausible theories to explain phenomenon X. Theory 1 can explain X well, and so can theory 2. But theory 1 can explain or predict only X, whereas theory 2 can explain or predict X—as well as phenomena Y and Z. Which is the better theory? We must conclude that theory 2 is better because it explains more diverse phenomena. That is, it has more scope than the other theory. The more a theory explains or predicts, the more it extends our understanding. And the more a theory explains or predicts, the less likely it is to be false because it has more evidence in its favor.

One kind of case that investigators sometimes explain as an instance of constructive perception is the UFO sighting. Many times people report seeing lights in the night sky that look to them like alien spacecraft, and they explain their perception by saying that the lights were caused by alien spacecraft. So we have two theories to explain the experience: constructive perception and UFOs from space. The constructive-perception theory can explain not only UFO sightings but all kinds of ordinary and extraordinary experiences—hallucinations, feelings of an unknown "presence," misidentification of crime suspects, contradictory reports in car accidents, and more. The UFO theory, however, is (usually) designed to explain just one thing: an experience of seeing strange lights in the sky.

Criteria of explanatory adequacy

Testability - is there some way to test whether the theory is true or false? Does it predict anything besides what it purports to explain?
Fruitfulness - to what extent does the theory successfully predict previously unknown phenomena?
Scope - to what extent does it explain diverse phenomena?
Simplicity - what sorts of assumptions does the theory make? Is any part of it ad hoc?
Conservatism - to what extent does the theory fit with our well-established beliefs?

Testimony

Testimony occurs whenever someone tells another person that something is the case without providing an argument. Basic principles:
-One should proportion one's beliefs to the evidence available to one (including testimonial evidence)
-It's not reasonable to believe a claim when there is no good reason for doing so
-If a claim conflicts with other claims we have good reason to accept, we have good grounds for doubting it
-If a claim conflicts with our background beliefs, we have good reason to doubt it

The Method of Concomitant Variation

The Method of Concomitant Variation: the likely cause is among those relevant factors that are closely correlated with the phenomenon. This method says that when two events are correlated—when one varies in close connection with the other—they are probably causally related.
Relevant factor - a factor that, given our beliefs about causal relationships, could possibly be a cause of the occurrence in question if it is situated in a certain way.
We can only use this method to evaluate causes of event types, not tokens; you can look at particular event tokens to make an inference about the event type. We need to look for the relevant factors or events that are correlated with the event type in question. The idea is that the cause is probably going to be correlated with the event type. This is not bare "correlation implies causation": we look only for correlations involving relevant factors.
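The method can be sketched in code: measure how closely a relevant factor varies with the phenomenon across observed cases. The data below are invented for illustration (glasses of a suspect wine consumed vs. severity of illness), and the correlation function is an ordinary Pearson coefficient.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical observations: glasses of the suspect wine consumed per patron
# vs. severity of illness (0-10). A correlation near 1 marks the wine as a
# likely cause -- though correlation alone never proves causation.
glasses = [0, 1, 2, 3, 4, 5]
severity = [0, 2, 3, 5, 7, 9]

r = pearson(glasses, severity)
print(round(r, 3))  # close to 1 => the factor co-varies with the phenomenon
```

The restriction to relevant factors does the philosophical work here: the code will happily correlate anything with anything, so it only serves the method when the inputs are factors that could plausibly be causes.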

Composition fallacy

The composition fallacy occurs when an argument mistakenly transfers an attribute of the individual parts of an object to the object as a whole (or mistakenly transfers an attribute of the members of a class to the class itself).
-"The bricks in the building weigh 3 pounds. So the building weighs 3 pounds."
-"A single nuclear bomb causes more deaths than a single cigarette. So, nuclear bombs cause more deaths than cigarettes."
-"All the players on the basketball team are very good. So, the basketball team is very good."
Not all transitions from part to whole are fallacious:
-"All the bricks in the house weigh more than 3 pounds, so the house weighs more than 3 pounds."
-"A star is brighter than a candle, and there are many more stars than candles. So, stars produce more light than candles."

Division fallacy

The division fallacy occurs when an argument mistakenly transfers an attribute of a whole object to the individual parts of the object (or mistakenly transfers an attribute of a class to the individual members).
-"The house weighs 160k pounds. So, each brick weighs 160k pounds."
-"Cigarettes cause more deaths than nuclear bombs. So, a single cigarette is more dangerous than a single nuclear bomb."
Not all transitions from whole to part are fallacious:
-"The car is red. So, the front half of the car is red."
-"Corvettes are generally the fastest cars on the road. So, individual Corvettes are fast cars."

Diversity among cases

The greater the diversity among the cases that exhibit the relevant similarities, the stronger the argument. Take a look at this argument:
(1) In the 1990s a US senator, a Republican from Virginia, was chairman of the commerce committee, had very close ties to Corporation X, had previously worked for Corporation X before coming to office, and was found to have been taking bribes from Corporation X.
(2) In the 1980s another US senator, a Democrat from Texas, was chairman of the commerce committee, had very close ties to Corporation X, had previously worked for Corporation X before coming to office, and was found to have been taking bribes from Corporation X.
(3) In the 1970s another US senator, an Independent from Arkansas with strong religious values, was chairman of the commerce committee, had very close ties to Corporation X, had previously worked for Corporation X before coming to office, and was found to have been taking bribes from Corporation X.
(4) Now the newly elected Senator Jones, a Democrat from New York with strong support from labor unions, is chairman of the commerce committee, has very close ties to Corporation X, and has previously worked for Corporation X before coming to office.
(5) Therefore, Senator Jones will take bribes from Corporation X.
Here we have several similarities in question, and they exist between the Senator Jones situation (described in premise 4) and three other cases (detailed in premises 1-3). But what makes this argument especially strong is that the cases are diverse despite the handful of similarities—one case involves a Republican senator from Virginia; another, a Democratic senator from Texas; and finally a religious Independent senator from Arkansas. This state of affairs suggests that the similarities are not accidental or contrived but are strongly linked even in a variety of situations.

Number of instances compared

The greater the number of instances, or cases, that show the relevant similarities, the stronger the argument. In the war argument, for example, there is only one instance that has all the relevant similarities: the Vietnam War. But what if there were five additional instances—five different wars that have the relevant similarities to the present war? The argument would be strengthened. The Vietnam War, though it is relevantly similar to the present war, may be an anomaly, a war with a unique set of properties. But citing other cases that are relevantly similar to the present war shows that the relevant set of similarities is no fluke.

Main operators

The main operator of a WFF is the logical operator that has the entire WFF in its scope. Not every WFF has an operator: atomic statements contain none, and so have no main operator.

The Method of Agreement and Difference

The method of agreement and difference: the likely cause is among those factors isolated when you (i) identify the relevant factors common to occurrences of a phenomenon and (ii) discard any of those factors that are present even when there are no occurrences.
We can only use this method to evaluate causes of event types, not tokens; you can look at particular event tokens to make an inference about the event type. If there is a single cause, then that cause will be present if and only if the event occurs. Ask: what factors could be relevant? Of those, which are both present when the event occurs and absent when it does not?
Let's apply this combined method to the mystery illness at Elmo's bar. Say that among the ten patrons who become ill, the common factors are that they all drank from the same bottle of wine and they all had the free tacos. So we reason that the likely cause is either the wine or the tacos. After further investigation, though, we find that other patrons who ate the tacos did not become ill. We conclude that the wine is the culprit.
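Steps (i) and (ii) can be sketched directly with set operations, using the Elmo's bar example. The patron data below are invented for illustration; each patron is represented as the set of relevant factors present in their case.

```python
# Patrons who became ill, each as a set of relevant factors.
ill = [
    {"wine", "tacos", "beer"},
    {"wine", "tacos"},
    {"wine", "tacos", "peanuts"},
]
# Patrons who did NOT become ill.
not_ill = [
    {"tacos", "beer"},
    {"tacos", "peanuts"},
]

# Step (i): factors common to every occurrence of the illness (agreement).
common = set.intersection(*ill)

# Step (ii): discard factors also present when the illness did not
# occur (difference).
present_without = set.union(*not_ill)
likely_causes = common - present_without

print(likely_causes)  # {'wine'}
```

Intersection implements "common to occurrences"; subtracting the union of the non-occurrence cases implements "present even when there are no occurrences," leaving the wine as the lone surviving candidate.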

Relevant similarities

The more relevant similarities there are between the things being compared, the more probable the conclusion. A similarity (or dissimilarity) is relevant to an argument by analogy if it has an effect on whether the conclusion is probably true. Consider this argument: In the Vietnam War, the United States had not articulated a clear rationale for fighting there, and the United States lost. Likewise, in the present war the United States has not articulated a clear rationale for fighting. Therefore, the United States will lose this war, too. There is just one relevant similarity noted here (the lack of rationale). As it stands, this argument is weak; the two wars are only dimly analogous. A single similarity between two wars in different eras is not enough to strongly support the conclusion. But watch what happens if we add more similarities: In the Vietnam War, the United States had not articulated a clear rationale for fighting, there was no plan for ending the involvement of US forces (no exit strategy), US military tactics were inconsistent, and the military's view of enemy strength was unrealistic. The United States lost the Vietnam War. Likewise, in the present war, the United States has not articulated a clear rationale for fighting, there is no exit strategy, US tactics are inconsistent, and the military's view of enemy strength is naive. Therefore, the United States will also lose this war. With these additional similarities between the Vietnam War and the current conflict, the argument is considerably stronger.

Social relativism

The truth in some domain is relativized to the beliefs of some social group. If my society believes the sky is blue, it's blue; if another society says it's red, it's red. Moral issues are likewise relativized to cultures.

Self-centered thinking

Favoring theories that align with your own beliefs; accepting a claim because it benefits you rather than because the evidence supports it.

Conservatism

This criterion says that other things being equal, the best theory is the one that fits best with our well-established beliefs—that is, with beliefs backed by excellent evidence or very good arguments. What kind of beliefs fall into the category of "well-established" knowledge? For starters, we can count beliefs based on our own everyday observations that we have no good reasons to doubt (such as "It's raining outside," "The parking lot is empty," and "The train is running late today"). We can include basic facts about the world drawn from excellent authority ("The Earth is round," "Men have walked on the moon," and "Cairo is the capital of Egypt"). And we can include a vast array of beliefs solidly supported by scientific evidence, facts recognized as such by most scientists ("Cigarettes cause lung cancer," "Vaccines prevent disease," "Dinosaurs existed," and "Germs cause infection"). Many of our beliefs, however, cannot be regarded as well established. Among these, of course, are all those we have good reasons to doubt. But there is also a large assortment of beliefs that occupy the middle ground between those we doubt and those we have excellent reasons to believe. We may have some reasons in favor of these beliefs, but those reasons are not so strong that we can regard the beliefs as solid facts. We can only proportion our belief to the evidence and be open to the possibility that we may be wrong. Very often such claims reside in areas that are marked by controversy—politics, religion, ethics, economics, and more. Among these notions, we must walk cautiously, avoid dogmatism, and follow the evidence as best we can. We should not assume that the claims we have absorbed from our upbringing and culture are beyond question.

Conjunction

True if and only if both of its parts (conjuncts) are true; otherwise it is false

Biconditional

True when both components have the same truth value; otherwise it is false
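The truth conditions of these connectives (conjunction, disjunction, biconditional) can be sketched as Boolean functions and checked against every assignment, which reproduces their truth tables.

```python
from itertools import product

# Truth functions matching the definitions in these cards.
def conj(p, q):
    return p and q      # true only when both conjuncts are true

def disj(p, q):
    return p or q       # false only when both disjuncts are false

def bicond(p, q):
    return p == q       # true when both have the same truth value

# Print the combined truth table, one row per assignment.
for p, q in product([True, False], repeat=2):
    print(p, q, conj(p, q), disj(p, q), bicond(p, q))
```

Enumerating assignments with `itertools.product` is the same move a truth table makes by hand: one row per combination of truth values.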

Contradiction

Two contradictory statements are such that each is true under a truth value assignment if and only if the other is false under that assignment (and so they have opposite truth values under the main operator in their combined truth table)

Logical equivalency

Two logically equivalent statements are such that each is true under a truth value assignment if and only if the other is true (and so they have identical truth values under the main operator in their combined truth table)

Inconsistency

Two or more statements that are inconsistent cannot all be true at once (and so there is not a single row on their combined truth table with T below all the main operators). If there's no row where they're all true, it's inconsistent.
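Contradiction, logical equivalence, and inconsistency can all be checked mechanically by brute-forcing the combined truth table, as in this sketch (the example statements illustrate De Morgan's law and a negation pair).

```python
from itertools import product

def assignments(n):
    """Every truth value assignment for n statement letters (table rows)."""
    return product([True, False], repeat=n)

# Example statements in letters P, Q:
s1 = lambda p, q: not (p and q)        # ~(P & Q)
s2 = lambda p, q: (not p) or (not q)   # ~P v ~Q
s3 = lambda p, q: p and q              # P & Q

# Equivalent: identical truth values in every row.
equivalent = all(s1(p, q) == s2(p, q) for p, q in assignments(2))
# Contradictory: opposite truth values in every row.
contradictory = all(s1(p, q) != s3(p, q) for p, q in assignments(2))
# Consistent: at least one row where both are true.
consistent = any(s1(p, q) and s3(p, q) for p, q in assignments(2))

print(equivalent, contradictory, consistent)  # True True False
```

`all(...)` over the rows captures "under every truth value assignment," and `any(...)` captures "there is a row with T below all the main operators."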

Testability

We often run into untestable theories in daily life, just as scientists sometimes encounter them in their work. Many practitioners of alternative medicine claim that health problems are caused by an imbalance in people's chi, an unmeasurable form of mystical energy that is said to flow through everyone. Some people say that their misfortunes are caused by God or the Devil. Others believe that certain events in their lives happen (and are inevitable) because of fate. And parents may hear their young daughter say that she did not break the lamp, but her invisible friend did.
A theory is testable if it predicts something other than what it was introduced to explain. Suppose your electric clock stops each time you touch it. One theory to explain this event is that there is an electrical short in the clock's wiring. Another theory is that an invisible, undetectable demon causes the clock to stop. The wiring theory predicts that if the wiring is repaired, the clock will no longer shut off when touched. So it is testable—there is something that the theory predicts other than the obvious fact that the clock will stop when you touch it. But the demon theory makes no predictions about anything, except the obvious, the very fact that the theory was introduced to explain. It predicts that the clock will stop if you touch it, but we already know this. So our understanding is not increased, and the demon theory is untestable.
Now, if the demon theory says that the demon can be detected with x-rays, then there is something the theory predicts other than the clock's stopping when touched. You can x-ray the clock and examine the film for demon silhouettes. If the theory says that the demon can't be seen but can be heard with sensitive sound equipment, then you have a prediction, something to look for other than clock stoppage.

Preferring available evidence

We prefer evidence that is more readily available to us. Some evidence is naturally more salient; we're more likely to notice and remember it.

Looking for confirming evidence

We seek out evidence that confirms our beliefs and avoid evidence that contradicts them

Resisting contrary evidence

When we have evidence that contradicts what we believe, we're much more likely to scrutinize it than evidence that supports our beliefs. It's difficult to raise objections to your own views.

