CSC 529 (Intro to AI)

Computers in Turing's day

"computers" in Turing's day were people who followed well-defined steps and computers as we know them today did not exist

Prolog

(declarative, non-imperative, logic-programming language): Program is set of facts and rules in language similar to FOPC (first-order predicate calculus, CSC350).

LISP

(functional, non-imperative language): Designed for manipulating lists of symbols, e.g. (car (cdr (cdr L)))
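The expression (car (cdr (cdr L))) peels two heads off the list and then takes the head of what remains, i.e. the third element. A minimal Python sketch of the LISP primitives (names borrowed from LISP for illustration; this is an analogy, not a LISP implementation):

```python
# car returns the head of a list; cdr returns the tail (all but the head).
# These mirror LISP's list primitives for illustration only.
def car(lst):
    return lst[0]

def cdr(lst):
    return lst[1:]

L = ["a", "b", "c", "d"]
# (car (cdr (cdr L))): drop two heads, then take the head -> third element.
print(car(cdr(cdr(L))))  # c
```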

what are the 5 requirements for AI knowledge representation?

1) easy to express the knowledge needed to solve the problem
2) easy to represent a problem in it
3) easy to interpret the output as a solution
4) efficient to compute a solution in it
5) can be acquired from people or past experience

What are the general ethical principles of the ACM Software Engineering Code of Ethics?

1. GENERAL ETHICAL PRINCIPLES
1.1 Contribute to society and to human well-being, acknowledging that all people are stakeholders in computing.
1.2 Avoid harm.
1.3 Be honest and trustworthy.
1.4 Be fair and take action not to discriminate.
1.5 Respect the work required to produce new ideas, inventions, creative works, and computing artifacts.
1.6 Respect privacy.
1.7 Honor confidentiality.

Asilomar principles

6) Safety
7) Failure Transparency
8) Judicial Transparency
9) Responsibility
10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12) Personal Privacy
13) Liberty and Privacy
14) Shared Benefit: AI technologies should benefit and empower as many people as possible.
15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
16) Human Control
17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
21) Risks

What is an organization?

A class of intelligent agent. An ant colony or a human company would be an example. Human society is an organization, and it is arguably the most intelligent agent there is.

Computational agent

A computational agent is an agent whose decisions about its actions can be explained in terms of computation.

what are some ethical codes for computer professionals?

ACM Software Engineering Code of Ethics
Asilomar AI Principles

We judge an agent by its...

Actions

Agent

An agent is something that acts in an environment - it does something. Agents include worms, dogs, thermostats, airplanes, robots, humans, companies, and countries.

World

An agent together with its environment is a world

ontology

An ontology is a specification of the meaning of the symbols used in an information system. It specifies what is being modeled and the vocabulary used in the system.

Optimal solution

An optimal solution to a problem is one that is the best solution according to some measure of solution quality.

Church-Turing thesis

Any effectively computable function can be carried out on a Turing machine (and so also in the lambda calculus or any of the other equivalent formalisms).

Analytical Engine

Designed by Babbage (1791-1871); it was the first design for a general-purpose computer, though a working machine was never completed in his lifetime.

deontology

Basic idea: ethical actions come from following moral rules and doing one's duty.
Roots in ancient cultures and religions.
Immanuel Kant: duty to respect each person; equality and dignity of every human being.

How are complex agents built?

Complex agents are built modularly in terms of interacting hierarchical layers.

CARE stands for....

Consider, Analyze, Review, Evaluate

difference between knowledge base and database?

A database has well-defined tables for storing information (many rows with the same schema).
A KB is more flexible than a tabular representation: information does not need to fit into tables.
A KB can represent information inferred from other information (information not actually stored in the KB)!

Symbolic reasoning was further developed by these other pioneers in the philosophy of mind

Descartes (1596-1650), Pascal (1623-1662), Spinoza (1632-1677), Leibniz (1646-1716)

What was going on with AI in the 1960s and 70s?

During the 1960s and 1970s, there was success in building natural language understanding systems in limited domains. For example, the STUDENT program of Daniel Bobrow (1967) could solve high school algebra problems expressed in natural language. Winograd's (1972) SHRDLU system could, using restricted natural language, discuss and carry out tasks in a simulated blocks world. CHAT-80 [Warren and Pereira (1982)] could answer geographical questions put to it in natural language.

What happened with AI in the 1970s and 80s?

During the 1970s and 1980s, there was a large body of work on expert systems, where the aim was to capture the knowledge of an expert in some domain so that a computer could carry out expert tasks. For example, DENDRAL [Buchanan and Feigenbaum (1978)], developed from 1965 to 1983 in the field of organic chemistry, proposed plausible structures for new organic compounds. MYCIN [Buchanan and Shortliffe (1984)], developed from 1972 to 1980, diagnosed infectious diseases of the blood, prescribed antimicrobial therapy, and explained its reasoning. The 1970s and 1980s were also a period when AI reasoning became widespread in languages such as Prolog

What happened with AI in the 1990s and 2000s?

During the 1990s and the 2000s there was great growth in the subdisciplines of AI such as perception, probabilistic and decision-theoretic reasoning, planning, embodied systems, machine learning, and many other fields. There has also been much progress on the foundations of the field; these form the foundations of this book.

virtue ethics

Focus on the character and habits of a morally excellent person (leading to ethical actions).
Roots in ancient cultures and religions, as well as modern philosophers (Hume, Nietzsche).
Example virtues: honesty, altruism.

Is it easier to get an approximately optimal solution than an optimal solution?

For some problems, it is much easier computationally to get an approximately optimal solution than to get an optimal solution. However, for other problems, it is (asymptotically) just as difficult to guarantee finding an approximately optimal solution as it is to guarantee finding an optimal solution. Some approximation algorithms guarantee that a solution is within some range of optimal, but for some algorithms no guarantees are available.

Probable solution

A probable solution is one that, even though it may not actually be a solution to the problem, is likely to be one. For example, in the case where the delivery robot could drop the trash or fail to pick it up when it attempts to, you may need the robot to be 80% sure that it has picked up three items of trash.

Who is the Grandfather of AI?

Hobbes (1588-1679). He claimed that thinking is symbolic reasoning, like working out the answer to a problem with paper and pen or talking it out loud.

Evaluate

How might the decision in this case be used as a precedent? What actions supported/violated the Code? Are the actions taken justified, particularly when considering the rights of and impact on all stakeholders?

rights

The idea that people have certain "negative" rights: the right not to be deprived of freedom, the right not to be harmed, ...
"Positive" rights (modern, more controversial): the right to employment, education, healthcare, ...

McCulloch and Pitts did what?

In 1943 they showed how a simple thresholding "formal neuron" could be the basis for a Turing-complete machine.
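A formal neuron of this kind outputs 1 when the weighted sum of its binary inputs reaches a threshold, and 0 otherwise. A sketch in Python, with illustrative weights and thresholds (chosen here, not taken from the 1943 paper), showing how logic gates fall out of threshold choices:

```python
# A McCulloch-Pitts style formal neuron: fire (1) iff the weighted sum
# of the inputs meets the threshold.
def neuron(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Logic gates as threshold choices (weights/thresholds are illustrative):
AND = lambda a, b: neuron([a, b], [1, 1], 2)   # fires only if both inputs fire
OR  = lambda a, b: neuron([a, b], [1, 1], 1)   # fires if at least one fires
NOT = lambda a: neuron([a], [-1], 0)           # inhibitory weight inverts the input

print(AND(1, 1), OR(0, 1), NOT(1))  # 1 1 0
```

Because AND, OR, and NOT suffice to build any Boolean circuit, networks of such neurons can compute any Boolean function, which is one step toward the Turing-completeness claim.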

Samuel did what?

In 1952 he built a checkers program, and later in the 1950s he implemented a program that learns how to play checkers.

Newell and Simon did what?

In 1956 they built Logic Theorist, a program that discovers proofs in propositional logic.

What kind of "artificial" are we talking about?

In this context it means man-made, not fake.

Plane analogy

Making a plane ---> a plane, and more understanding of how a bird flies (we now know about aerodynamics).
Making an AI ---> an AI, and more understanding of intelligence in general.

Does an agent have direct access to its history?

No, but it does have direct access to what it has remembered (its belief state) and what it has just observed.

Satisficing solution

Often an agent does not need the best solution to a problem but just needs some solution. A satisficing solution is one that is good enough according to some description of which solutions are adequate. For example, a person may tell a robot that it must take all of the trash out, or tell it to take out three items of trash.

Approximately optimal solution

One of the advantages of a cardinal measure of success is that it allows for approximations. An approximately optimal solution is one whose measure of quality is close to the best that could theoretically be obtained. Typically agents do not need optimal solutions to problems; they only must get close enough. For example, the robot may not need to travel the optimal distance to take out the trash but may only need to be within, say, 10% of the optimal distance.

Rosenblatt did what?

One of the early significant works was the Perceptron of Rosenblatt (1958).
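The perceptron's classic learning rule nudges the weights toward a correct classification on each error. A minimal sketch (a simplification under assumed hyperparameters, not Rosenblatt's original formulation), trained on the linearly separable OR function:

```python
# Minimal perceptron with the classic error-correction learning rule.
def train_perceptron(data, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred           # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x1         # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the linearly separable OR function.
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in or_data])  # [0, 1, 1, 1]
```

A single perceptron can only learn linearly separable functions (it fails on XOR), which is the limitation Minsky and Papert later emphasized.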

What are the four common classes of solutions?

Optimal solution, satisficing solution, approximately optimal solution, probable solution

Prolog - non-imperative

The order of statements is NOT the order of execution (unlike C or Java).
There is no language syntax for controlling execution: no if-then-else, no for-loops, no blocks ("curly braces").

why teach ai ethics?

Prepare you (future AI developers) to recognize and analyze ethical problems in your career in AI

Prolog - declarative

A program consists of facts and rules that can be used to answer many different questions.
Ex. fact: bird(tweety).
Ex. rule: cardinal(X) :- bird(X), red(X).

Consider

Relevant actors and stakeholders? Effects of decisions for stakeholders? Additional details that would provide greater understanding of situational context?

What is the Turing test?

The Turing test consists of an imitation game where an interrogator can ask a witness, via a text interface, any question. If the interrogator cannot distinguish the witness from a human, the witness must be intelligent.

What kind of intelligence are we interested in?

The definition is not about intelligent THOUGHT. We are interested in THINKING intelligently only insofar as it leads to better performance; the role of thought is to affect action.

Artificial Intelligence

The field that studies the synthesis and analysis of computational agents that act intelligently

Minsky did what?

The first learning algorithms for these neural networks were described by Minsky (1952).

What is the motivation for the Turing test?

The idea of external behavior defining intelligence. An agent that is not really intelligent could not fake intelligence for arbitrary topics.

What happened to work on neural networks after Minsky and Papert's book?

Work on neural networks went into decline for a number of years after the 1969 book Perceptrons by Minsky and Papert (expanded edition 1988), which argued that the representations learned were inadequate for intelligent action.

Measuring solution quality

This measure is typically specified as an ordinal, where only the order matters. However, in some situations, such as when combining multiple criteria or when reasoning under uncertainty, you need a cardinal measure, where the relative magnitudes also matter. An example of an ordinal measure is for the robot to take out as much trash as possible; the more trash it can take out, the better. As an example of a cardinal measure, you may want the delivery robot to take as much of the trash as possible to the garbage can, minimizing the distance traveled, and explicitly specify a trade-off between the effort required and the proportion of the trash taken out. It may be better to miss some trash than to waste too much time.
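A cardinal measure makes the trade-off explicit as a single score. A sketch of the trash-robot example, where the cost per meter traveled is an illustrative assumption (not a value from the text):

```python
# Cardinal quality measure: proportion of trash collected minus a cost
# for distance traveled. The 0.01-per-meter weight is an assumed value
# chosen only to illustrate the trade-off.
def quality(proportion_collected, distance_m, cost_per_m=0.01):
    return proportion_collected - cost_per_m * distance_m

# Collecting everything via a long detour can score worse than missing
# some trash on a short route:
print(quality(1.0, 120))  # approx. 1.0 - 1.2 = -0.2
print(quality(0.8, 20))   # approx. 0.8 - 0.2 =  0.6
```

An ordinal measure could only say "more trash is better"; the cardinal score above can also express that it may be better to miss some trash than to waste too much travel.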

Compare actions of two deterministic agents

Two deterministic agents with the same prior knowledge, history, abilities, and goals should do the same thing. Changing any one of these can result in different actions.

Winograd schema

Two parties are referred to in a sentence, and a pronoun could refer to either party. Knowledge about the world is needed to decide which party is the pronoun's referent.
Ex. "The trophy doesn't fit in the suitcase because it is too big/small." What is too big/small?

Some of the questions that must be considered when given a problem or a task are the following

What is a solution to the problem?
How good must a solution be?
How can the problem be represented?
What distinctions in the world are needed to solve the problem?
What specific knowledge about the world is required?
How can an agent acquire the knowledge from experts or from experience?
How can the knowledge be debugged, maintained, and improved?
How can the agent compute an output that can be interpreted as a solution to the problem?
Is worst-case performance or average-case performance the critical time to minimize?
Is it important for a human to understand how the answer was derived?

Review

What responsibilities, authority, practices, or policies shaped the actors' choices? What potential actions could have changed the outcomes?

Analyze

What stakeholder rights were impacted? What technical facts are most relevant to actors' decision? What principles of the Code were most relevant? What personal, institutional, or legal values should be considered?

What are questions you should ask when analyzing an ethics case study?

Who is affected by the AI? What is the impact on them?
What ethical questions are raised by the case study?
Does the AI behave ethically? Why or why not?
How do different ethical theories or codes explain your answer?

Is an expert system + a human who provides perceptual info and carries out the task an agent?

Yes, it is.

robot

a coupling of a computational engine with physical sensors and actuators

Software agent

a program that acts in a purely computational environment

knowledge representation

abbreviated as KR, it's representing knowledge in a form that the computer can reason with (e.g. first-order logic in CSC350)

knowledge level of abstraction in ai

agent's knowledge, beliefs, and goals (e.g. Prolog facts and rules) - does not specify how a solution will be computed

Expert system

an advice-giving computer

Where does human intelligence come from?

biology: humans have evolved into adaptable animals that can survive in various habitats.
culture: culture provides not only language, but also useful tools, useful concepts, and the wisdom that is passed from parents and teachers to children.
life-long learning: humans learn throughout their lives and accumulate knowledge and skills.

software engineer

builds software such as the inference engine (e.g. implements Prolog theorem prover in C) and user interface

Turing machine

by Alan Turing (1912-1954), a theoretical machine that writes symbols on an infinitely long tape

lambda calculus of Church

by Church (1903-1995), a mathematical formalism for rewriting formulas

how can facts and rules be acquired for a knowledge base?

by a human engineer or it can be learned from experience

utilitarian ethics

choose the action that leads to the greatest amount of good for the greatest number; balance good and bad consequences over the long term

symbol level of abstraction in ai

computer manipulates symbols at this level, e.g., what the Prolog interpreter does (unification and theorem-proving algorithms)

what are some general ethical theories?

deontology, rights, virtue ethics, utilitarianism

To solve a problem, the designer of a system must...

flesh out the task and determine what constitutes a solution; represent the problem in a language with which a computer can reason; use the computer to compute an output, which is an answer presented to a user or a sequence of actions to be carried out in the environment; and interpret the output as a solution to the problem.

effectively computable

following well-defined operations

how has linguistics benefited from ai?

formal grammars, models of human language understanding and production

how has philosophy benefited from AI?

formal logics (FOL, modal logic, non-monotonic reasoning)

domain expert

gives the knowledge engineer information about the domain (e.g. medicine); has no expertise in AI or software

which is easier for us to work with: a higher level of abstraction, or a lower level of abstraction in AI?

higher level of abstraction

ai ethics

how (human) developers of AI artifacts should act as well as how (artificial) intelligent agents should act

the knowledge base was originally seen as a model of what?

human "long-term memory"

knowledge engineer

manually encodes facts and rules in KR languages such as Prolog

how has cognitive psychology benefited from ai?

models of human problem-solving, language, vision, learning, emotion/personality

AI has been a ___________ disciplinary field of computer science since the beginning (~1956)

multi

when is the knowledge base usually built?

offline (before the agent needs to solve any problems)

What an agent does depends on its...

prior knowledge about the agent and the environment; history of interaction with the environment, which is composed of observations of the current environment and past experiences of previous actions and observations, or other data, from which it can learn; goals that it must try to achieve or preferences over states of the world; and abilities, which are the primitive actions it is capable of carrying out.

how has mathematics benefited from ai?

reasoning under uncertainty

epistemology

study of knowledge

How can we understand the principles that make intelligent behavior possible in natural or artificial systems?

the analysis of natural and artificial agents; formulating and testing hypotheses about what it takes to construct intelligent agents; and designing, building, and experimenting with computational systems that perform tasks commonly viewed as requiring intelligence.

What is the central engineering goal of AI?

the design and synthesis of useful, intelligent artifacts.
Commercial applications ("low-hanging fruit").
Solutions to narrow, well-defined problems, e.g. facial recognition, sorting packages, classifying tweets as pro- or anti-issue.
Not required to have a transparent solution.
Often uses machine learning methods.
Often the subject of "hyped" (exaggerated, misleading) claims in the news.

representation scheme

the form of the knowledge that is used in an agent.

Knowledge

the information about a domain that can be used to solve problems in that domain.

A representation of some piece of knowledge is...

the internal representation of the knowledge

knowledge base

the representation of all of the knowledge that is stored by an agent.

user

through user interface, asks the system to solve a certain problem

The scientific goal of AI is to...

understand the principles that make intelligent behavior possible in natural or artificial systems.
Studying natural intelligence (cognitive science).
Building computational models that can do things requiring intelligence, e.g. solving high school physics problems.
The solution must be transparent (we can see how it works).
Often uses "knowledge-based" methods (instead of "big data").

inference engine

uses the KB and information about the current state (from sensors or users) to solve problems "online".
Example: the Prolog interpreter is an inference engine that uses a kind of pattern matching ("unification") and theorem-proving to answer queries from the user.
The output of the inference engine could be updates to the KB, output to the user (via the interface), or robot actions.
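A much simpler inference engine than Prolog's can still show how conclusions not stored in the KB are derived from facts and rules. This sketch forward-chains over ground facts only (no variables or unification, so it is a deliberate simplification of what Prolog does), using the bird/cardinal rule from the Prolog cards:

```python
# Toy forward-chaining inference engine over ground (variable-free) facts.
# A rule is (conclusion, [premises]): the conclusion is derived once all
# premises are known. Simplified illustration; Prolog also does unification.
def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:                      # keep applying rules to a fixed point
        changed = False
        for conclusion, premises in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

facts = {"bird(tweety)", "red(tweety)"}
rules = [("cardinal(tweety)", ["bird(tweety)", "red(tweety)"])]
kb = forward_chain(facts, rules)
print("cardinal(tweety)" in kb)  # True: inferred, not stored
```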

"Much work in AI is motivated by commonsense reasoning" means

we want the computer to be able to make commonsense conclusions about the unstated assumptions

issues with utilitarian ethics

what is "good"? how do we balance good and bad?

An agent acts intelligently when...

what it does is appropriate for its circumstances and its goals, it is flexible to changing environments and changing goals, it learns from experience, and it makes appropriate choices given its perceptual and computational limitations. An agent typically cannot observe the state of the world directly; it has only a finite memory and it does not have unlimited time to act.

Given a well-defined problem, the next issue is...

whether it matters if the answer returned is incorrect or incomplete. For example, if the specification asks for all instances, does it matter if some are missing? Does it matter if there are some extra instances?

Can you fake intelligence?

You cannot have fake intelligence. If an agent behaves intelligently, it is intelligent. Only external behavior defines intelligence; acting intelligently is being intelligent.

