A.I Midterm - CH 1


Agent program

implements the agent function

Problems with simple reflex agents are:

- Very limited intelligence.
- No knowledge of non-perceptual parts of the state.
- Usually too big to generate and store.
- If any change occurs in the environment, the collection of rules needs to be updated.

Learning Agent

A learning agent in AI is an agent that can learn from its past experiences, i.e., it has learning capabilities. It starts to act with basic knowledge and is then able to act and adapt automatically through learning. A learning agent has four main conceptual components:
- Learning element: responsible for making improvements by learning from the environment.
- Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
- Performance element: responsible for selecting external actions.
- Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
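A minimal Python sketch of how these four components might interact in one decision step; the class and method names (LearningAgent, evaluate, improve, suggest, select_action) are illustrative assumptions, not a standard API:

```python
# Sketch only: the performance element picks actions, the critic scores
# behaviour against a fixed standard, the learning element improves the
# performance element, and the problem generator proposes exploratory actions.

class LearningAgent:
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element
        self.learning_element = learning_element
        self.critic = critic
        self.problem_generator = problem_generator

    def step(self, percept):
        feedback = self.critic.evaluate(percept)               # how well are we doing?
        self.learning_element.improve(self.performance_element, feedback)
        exploratory = self.problem_generator.suggest(percept)  # try something new?
        if exploratory is not None:
            return exploratory
        return self.performance_element.select_action(percept)
```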

Simple Reflex Agent (give an example)

A simple reflex agent responds to the current percept with a pre-determined action, using condition-action rules. A condition-action rule maps a state (i.e., a condition) to an action: if the condition is true, the action is taken; otherwise it is not. This agent function only succeeds when the environment is fully observable. For example, a vacuum agent that sucks whenever the current square is dirty.
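A minimal sketch of such condition-action rules for the two-square vacuum world; the percept format and action names are illustrative assumptions:

```python
# Simple reflex vacuum agent: the action depends only on the current percept
# (location, status), matched against condition-action rules.

def simple_reflex_vacuum_agent(percept):
    location, status = percept          # e.g. ('A', 'Dirty')
    if status == 'Dirty':
        return 'Suck'                   # condition: square is dirty -> suck
    if location == 'A':
        return 'Right'                  # condition: at A and clean -> move right
    return 'Left'                       # condition: at B and clean -> move left
```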

Every agent is rational in an unobservable environment.

False. Built-in knowledge can make an agent rational even in an unobservable environment. A vacuum agent that repeatedly cleans and moves would be rational, but one that never moves would not be.

Every agent function is implementable by some program/machine combination

False. Consider an agent whose only action is to return an integer, and who perceives a bit each turn. It gains a point of performance if the integer returned matches the value of the entire bit string perceived so far. Eventually, any agent program will fail because it will run out of memory.

The input to an agent program is the same as the input to the agent function

False. The input to an agent function is the percept history. The input to an agent program is only the current percept; it is up to the agent program to record any relevant history needed to choose actions.
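A minimal sketch of the distinction, assuming a table-driven design: the program receives one percept at a time and keeps its own history, so that together with its table it implements the agent function. The names here are illustrative only:

```python
# The agent *function* maps the whole percept sequence to an action; the agent
# *program* only receives the current percept and records the history itself.

percepts = []          # history recorded by the program, not passed in

def table_driven_agent_program(percept, table):
    percepts.append(percept)            # remember the new percept
    return table.get(tuple(percepts))   # look up the action for the whole sequence
```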

T/F: An agent that senses only partial information about the state cannot be perfectly rational.

False. The vacuum-cleaning agent is rational even though it does not observe the state of the adjacent square.

Properties of Task Environments are:

- Fully observable (vs. partially observable): the agent's sensors give it access to the complete state of the environment at each point in time.
- Deterministic (vs. stochastic): deterministic if the agent can determine what the next state will be; stochastic if it cannot, because there is randomness in the environment.
- Episodic (vs. sequential): episodic if the experience is divided into episodes in which the agent receives a percept from the environment and then takes a decision (e.g., vacuum); sequential if the current decision can affect future decisions (e.g., chess, taxi driving).
- Static (vs. dynamic): is the environment changing while the agent deliberates?
- Discrete (vs. continuous): relates to how time, percepts, and actions are handled; discrete means fixed values (e.g., crosswords, poker), continuous otherwise (e.g., taxi driving).
- Single agent (vs. multiagent): an agent operating by itself in an environment, versus alongside other agents; multiagent environments may be competitive or cooperative.

Model-based Agent

A model-based agent works by finding a rule whose condition matches the current situation. It can handle partially observable environments by using a model of the world. The agent keeps track of an internal state that is adjusted by each percept and depends on the percept history; the current state is stored inside the agent as a structure describing the part of the world that cannot be seen. Updating the state requires information about:
- how the world evolves independently of the agent, and
- how the agent's actions affect the world.
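A minimal sketch of a model-based reflex agent, assuming a hypothetical model object with an update_state method and rules given as (condition, action) pairs:

```python
# Sketch only: the internal state is updated from the previous state, the last
# action, the new percept, and a model of how the world evolves and how actions
# affect it; then a matching condition-action rule selects the action.

class ModelBasedReflexAgent:
    def __init__(self, model, rules):
        self.model = model          # assumed transition/sensor model of the world
        self.rules = rules          # list of (condition, action) pairs over states
        self.state = None
        self.last_action = None

    def __call__(self, percept):
        # update_state is assumed to combine state, last action, percept and model
        self.state = self.model.update_state(self.state, self.last_action, percept)
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
        self.last_action = None
        return None                 # no rule matched
```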

Give a PEAS description for: Practicing tennis against a wall

P- Improved performance in future tennis matches E- Near a wall A- Tennis racquet, Legs S- Eyes, Ears. observable, single agent, stochastic, sequential, dynamic, continuous, unknown

Give a PEAS description for: Bidding on an item at an auction

P- Item acquired, Final price paid for item E- Auction House (or online) A- Bidding S- Eyes, Ears. Partially observable, multi agent, stochastic (tie-breaking for two simultaneous bids), episodic, dynamic, continuous, known

Give a PEAS description of Playing soccer.

P- Win/Lose E- Soccer field A- Legs, Head, Upper body S- Eyes, Ears. partially observable, multi agent, stochastic, sequential, dynamic, continuous, unknown

Give a PEAS description for: Playing a tennis match

P- Win/Lose E- Tennis court A- Tennis racquet, Legs S- Eyes, Ears. partially observable, multi agent, stochastic, sequential, dynamic, continuous, unknown

Give a PEAS description of: Medical Diagnosis System

P- healthy patient, minimize costs, lawsuits E- patient, hospital, staff A- display questions, tests, diagnosis, treatments, referrals S- keyboard entry of symptoms, findings, patient's answers

Give a PEAS description of: Taxi Driver

P- safe, fast, legal, comfortable trip, maximize profits E- roads, traffic, pedestrians, weather A- steering, accelerator, brake, signal, horn, display S- camera, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard

Give a PEAS description for: Autonomous Mars rover

P: Terrain explored and reported, samples gathered and analyzed E: Launch vehicle, lander, Mars A: Wheels/legs, sample collection device, analysis devices, radio transmitter S: Camera, touch sensors, accelerometers, orientation sensors, wheel/joint encoders, radio receiver

Give a PEAS and description of: Playing soccer

P: Winning game, goals for/against E: Field, ball, own team, other team, own body A: Devices (e.g., legs) for locomotion and kicking S: Camera, touch sensors, accelerometers, orientation sensors, wheel/joint encoders

Give a PEAS description for: Mathematician's theorem-proving assistant

P: good math knowledge, can prove theorems accurately and in minimal steps/time E: Internet, library A: display S: keyboard

Utility-based agents

Utility-based agents choose actions based on a preference (utility) for each state. When there are multiple possible alternatives, a utility-based agent decides which one is best. Sometimes achieving the desired goal is not enough: we may want a quicker, safer, or cheaper trip to reach a destination, so the agent's "happiness" should be taken into consideration, and utility describes how "happy" the agent is. Because of the uncertainty in the world, a utility-based agent chooses the action that maximizes the expected utility. A utility function maps a state onto a real number describing the associated degree of happiness.
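A minimal sketch of expected-utility action selection; outcomes_of and utility are hypothetical helpers standing in for the agent's model and utility function:

```python
# For each available action, sum utility(outcome) weighted by the probability
# of that outcome, then pick the action with the highest expected utility.

def choose_action(state, actions, outcomes_of, utility):
    def expected_utility(action):
        # outcomes_of(state, action) is assumed to yield (next_state, probability) pairs
        return sum(p * utility(s2) for s2, p in outcomes_of(state, action))
    return max(actions, key=expected_utility)
```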

Goal based agent

These agents take decisions based on how far they currently are from their goal (a description of desirable situations). Every action is intended to reduce the distance from the goal. This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state. The knowledge that supports its decisions is represented explicitly and can be modified, which makes these agents more flexible. They usually require search and planning, and their behavior can easily be changed.
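A minimal sketch of goal-based action selection using breadth-first search; the goal_test and successors helpers are illustrative assumptions, and states are assumed to be hashable:

```python
# The agent looks ahead through possible action sequences and returns the
# first action of a sequence that reaches a goal state.

from collections import deque

def plan_first_action(start, goal_test, successors):
    frontier = deque([(start, [])])       # (state, actions taken to reach it)
    explored = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal_test(state):
            return actions[0] if actions else None   # already at the goal
        # successors(state) is assumed to yield (action, next_state) pairs
        for action, next_state in successors(state):
            if next_state not in explored:
                explored.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None                           # no plan found
```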

There exists a task environment in which every agent is rational.

True. Consider a task environment in which all actions (including no action) give the same reward.

There exist task environments in which no pure reflex agent can behave rationally.

True. The card game Concentration (Memory) is one. Anything where memory is required to do well will thwart a reflex agent.

An agent: − Perceives its _____, − Through its _______, − Then achieves its _______, − By acting on its environment via ______

environment, sensors, goals, actuators

Give a PEAS and description of: Performing a high jump

- P: Clearing the jump - E: Track - A: Legs, Body - S: eyes

Give a PEAS and description of: Shopping for used AI books on the Internet

- P: Cost of book, quality/relevance/correct edition - E: Internet's used book shop - A: key entry, cursor - S: website interfaces, browser

Give a PEAS and description of: Bidding on an item at an auction

- P: Item acquired, final price paid for item - E: Auction House (or online) - A: Bidding - S: Eyes and ears

Give a PEAS and description of: Playing a tennis match

- P: Win/Lose - E: Tennis court - A: Tennis racquet, Legs - S: Eyes, Ears

An agent is anything that can be viewed as :

- perceiving its environment through sensors and - acting upon that environment through actuators

Rational agent

A rational agent can be anything that makes decisions, such as a person, firm, machine, or software. It carries out the action with the best outcome after considering past and current percepts.

Agents and Artificial Intelligence

In artificial intelligence, an intelligent agent (IA) is an autonomous entity which acts upon an environment, directing its activity towards achieving goals (i.e., it is an agent), using observation through sensors and consequent actuators (i.e., it is intelligent).

Autonomy

An intelligent agent operating on an owner's behalf but without any interference from that owning entity.

Agent Function

A map from the percept sequence (the history of everything the agent has perceived to date) to an action.

Give a PEAS description for: Performing a high jump

P- Clearing the jump or not E- Track A- Legs, Body S- Eyes. observable, single agent, stochastic, sequential, dynamic, continuous, unknown

Give a PEAS description of Shopping for used AI books on the Internet.

P- Cost of book, quality/relevance/correct edition E- Internet's used book shops A- key entry, cursor S- website interfaces, browser. partially observable, multi agent, stochastic, sequential, dynamic, continuous, unknown

Give a PEAS description for: Knitting a sweater

P- Quality of resulting sweater E- Rocking chair A- Hands, Needles S- Eyes. observable, single agent, stochastic, sequential, dynamic, continuous, unknown

What PEAS stands for?

Performance - which qualities should it have? Environment - where should it act? Actuators - how will it perform actions? Sensors - how will it perceive the environment?

logical reasoning

The process of arriving at a conclusion through a series of ordered steps

Rationality

logic and reasoning

intelligence

the ability to learn from experience, solve problems, and use knowledge to adapt to new situations

