AI Midterm

AI Acting Humanly

The "acting humanly" approach defines AI as acting like a person. The classic example of this is the Turing test, which is concerned only with the actions, i.e., the outcome or product of the human's thinking process, not the process itself.

Performance Measures

An objective criterion for the success of an agent's behavior. An agent can use performance measures to improve its behavior and maximize its goal achievement. Common examples include accuracy, precision, and recall.

Machine Learning

Leverages massive amounts of data so that computers can act and improve on their own without additional programming.

Model-based Agent

Model-based reflex agents are made to deal with partial observability; they do this by keeping track of the part of the world they cannot see now. The agent maintains an internal state that depends on what it has perceived before, so it holds information about the unobserved aspects of the current state. For example, after our Mars lander picks up its first sample, it stores this in its internal state of the world around it, so when it comes across a second, identical sample it passes it by and saves space for other samples.
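
A minimal sketch of that idea in Python; the class, the "collect/pass by" actions, and the rock labels are hypothetical, not from any real lander system:

```python
class ModelBasedCollector:
    """Model-based reflex agent: keeps an internal state of samples already collected."""

    def __init__(self):
        self.collected = set()  # internal model of the unobserved world: what we already hold

    def act(self, percept):
        """percept is the kind of rock currently in view, e.g. 'basalt'."""
        if percept in self.collected:
            return "pass by"          # the internal state says we already have this sample
        self.collected.add(percept)   # update the internal state
        return "collect"


lander = ModelBasedCollector()
print(lander.act("basalt"))  # collect
print(lander.act("basalt"))  # pass by -- the internal state remembers the first sample
```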

Sensors

collect data about the environment

agentive technology

Agentive technology does things on your behalf while allowing you to turn your attention elsewhere, e.g., setting different lighting for different activities (dimming the lights if you are watching a movie). This is an emerging category of technology, which will need new approaches to user experience design.

dynamic environment

Dynamic AI environments, such as the vision AI systems in drones, deal with data sources that change quite frequently while the agent is deliberating. Taxi driving is dynamic.

Utilitarianism

The idea that the goal of society should be to bring about the greatest happiness for the greatest number of people.
-Utility in AI: how happy you are.
-Each ethical choice is evaluated by comparing the sum of individual utility over all people in society.

natural language processing

uses AI techniques to enable computers to generate and understand natural human languages, such as English

AI Acting Rationally

"do the right thing" Acting rationally means acting to achieve one's goals, given one's beliefs or understanding about the world. An agent is a system that perceives an environment and acts within that environment. An intelligent agent is one that acts rationally with respect to its goals. For example, an agent that is designed to play a game should make moves that increase its chances of winning the game.

Artificial Narrow Intelligence

(Narrow AI) is AI that is programmed to perform a single task — whether it's checking the weather, being able to play chess, or analyzing raw data to write journalistic reports. ANI systems can attend to a task in real-time, but they pull information from a specific data-set. As a result, these systems don't perform outside of the single task that they are designed to perform.

accuracy

(The number of cases the AI gets correct) divided by (the total number of cases tested). Example: if the AI answers 60 questions correctly out of 100, then accuracy is 60%.
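
The same calculation as a tiny Python function, using the example numbers above:

```python
def accuracy(num_correct, num_total):
    """Accuracy = number of correct cases / total number of cases tested."""
    return num_correct / num_total

print(accuracy(60, 100))  # 0.6, i.e. 60%
```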

Clustering

(unsupervised learning) The target features are not given in the training examples. The aim is to construct a natural classification that can be used to cluster the data. The general idea behind clustering is to partition the examples into clusters or classes. Each class predicts feature values for the examples in the class.
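
A minimal k-means sketch in plain Python to show the partition-into-clusters idea; the 1-D example data, the choice of k, and the fixed iteration count are all invented for illustration:

```python
import random

def kmeans(points, k=2, iters=10):
    """Partition 1-D points into k clusters by alternating assignment and mean update."""
    centers = random.sample(points, k)          # start from k random examples
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                        # assign each point to its nearest center
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        for i, c in enumerate(clusters):        # move each center to the mean of its cluster
            if c:
                centers[i] = sum(c) / len(c)
    return centers, clusters

centers, clusters = kmeans([1.0, 1.2, 0.8, 9.0, 9.5, 10.1], k=2)
print(centers)  # two centers near 1.0 and 9.5 (order may vary): natural classes found without labels
```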

The Ethics Commission's Report

-Automated and connected driving is an ethical imperative if the systems cause fewer accidents than human drivers (positive balance of risk).
-In hazardous situations, the protection of human life must always have top priority; damage to property is to be accepted in preference to personal injury.
-In unavoidable accident situations, any distinction between individuals based on personal features (age, gender, physical or mental constitution) is impermissible.
-In every driving situation, it must be clearly regulated and apparent who is responsible for the driving task: the human or the computer.
-It must be documented and stored who is driving (to resolve possible issues of liability, among other things).
-Drivers must always be able to decide for themselves whether their vehicle data are forwarded and used (data sovereignty).

AI privacy

-Data collection via sensors on AI-driven consumer products without the knowledge or consent of the user.
-AI used to identify people who wish to remain anonymous.
-AI used to infer information about people from their non-sensitive data.
-AI used to profile people based on population-scale data and make consequential decisions.

deterministic model

A model in which all uncontrollable inputs are known and cannot vary. The next state of the environment is completely predictable from the current state and the action executed by the agent.

Virtue Ethics

A moral theory that focuses on the development of virtuous character and on the individual: how one develops good qualities/virtues (such as honesty or courage) and the ability to apply them.

Turing Test

A test proposed by Alan Turing in which a machine would be judged "intelligent" if the software could use conversation to fool a human into thinking it was talking with a person instead of a machine.

AI thinking rationally

AI as modeling how we should think.
• The "thinking rationally" approach to AI uses symbolic logic to capture the laws of rational thought as symbols that can be manipulated.
• Reasoning involves manipulating the symbols according to well-defined rules, somewhat like algebra.
• The result is an idealized model of human reasoning. This approach is attractive to theorists, i.e., it models how humans should think and reason in an ideal world.
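
A tiny forward-chaining sketch in Python of the "manipulate symbols by well-defined rules" idea; the facts and rules are made up for illustration:

```python
# Rules are (premises, conclusion) pairs over symbols; facts is the set of known symbols.
rules = [({"human"}, "mortal"),
         ({"mortal", "philosopher"}, "questions_own_mortality")]
facts = {"human", "philosopher"}

changed = True
while changed:                      # forward chaining: apply rules until nothing new is derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # the conclusions follow mechanically from symbol manipulation
```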

bias in AI

AI systems are only as good as the data we put into them. Bad data can contain implicit racial, gender, or ideological biases. Many AI systems will continue to be trained using bad data, making this an ongoing problem.

goal-based agent

Achieving a goal can take one action or many actions. Search and planning are two subfields of AI devoted to finding sequences of actions that achieve an agent's goals. Unlike the reflex agents, before acting this agent reviews many possible actions and chooses the one that comes closest to achieving its goals, whereas reflex agents just have an automated response for certain situations. Although the goal-based agent does a lot more work than the reflex agent, this makes it much more flexible, because the knowledge used for decision making is represented explicitly and can be modified. For example, if our Mars lander needed to get up a hill, the agent could update its knowledge of how much power to put into the wheels to reach certain speeds; all relevant behaviors would then automatically follow from the new knowledge about moving. In a reflex agent, by contrast, many condition-action rules would have to be rewritten.
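
One way to sketch "review many actions and pick the one closest to the goal" in Python; the hill-climbing setup, the candidate power levels, and the speed model are hypothetical:

```python
def predicted_speed(power, slope):
    """Hypothetical model of how the world evolves: more power and less slope -> more speed."""
    return power * (1.0 - slope)

def choose_action(goal_speed, slope, actions=(0.2, 0.4, 0.6, 0.8, 1.0)):
    """Goal-based choice: simulate each candidate action, pick the one closest to the goal."""
    return min(actions, key=lambda power: abs(predicted_speed(power, slope) - goal_speed))

# If knowledge about the hill changes, only predicted_speed needs updating --
# the decision-making code automatically follows the new knowledge.
print(choose_action(goal_speed=0.5, slope=0.3))
```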

Rational Agent

An agent that acts to maximize its expected performance measure. Rational agents change, create, and pursue goals. A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.
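
A minimal sketch of "best expected outcome" under uncertainty; the actions and their outcome probabilities are invented for illustration:

```python
# Each action maps to a list of (probability, performance-measure value) outcomes.
actions = {
    "safe_move":  [(1.0, 5)],                 # certain, modest payoff
    "risky_move": [(0.5, 12), (0.5, -2)],     # uncertain, higher expected payoff
}

def expected_value(outcomes):
    return sum(p * value for p, value in outcomes)

best = max(actions, key=lambda a: expected_value(actions[a]))
print(best, expected_value(actions[best]))  # picks the action with the highest expected value
```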

Algorithm

Algorithms are shortcuts people use to tell computers what to do. At its most basic, an algorithm simply tells a computer what to do next with an "and," "or," or "not" statement. When chained together, algorithms - like lines of code - become more robust. They're combined to build AI systems like neural networks. Since algorithms can tell computers to find an answer or perform a task, they're useful for situations where we're not sure of the answer to a question or for speeding up data analysis.

Learning Agent

Another way of creating an agent is to have it learn new actions as it goes about its business; this still requires some initial knowledge but cuts down greatly on the programming, and it allows the agent to work in environments that are unknown. A learning agent can be split into four parts. The learning element is responsible for improvements; it can make changes to any of the knowledge components in the agent. One way of learning is to observe pairs of successive states in the percept sequence, from which the agent can learn how the world evolves. The performance element is responsible for selecting external actions; it corresponds to the whole of the agents discussed previously. The critic gives the learning agent feedback on how well it is doing and determines how the performance element should be modified, if at all, to improve the agent. The last component is the problem generator: since the performance element only suggests actions it already knows, we need a way of getting the agent to experience new situations. The problem generator is responsible for suggesting actions that will lead to new and informative experiences.
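
A skeleton of the four components in Python; the class, method names, and placeholder logic are my own, just to show how the pieces connect:

```python
class LearningAgent:
    """Skeleton: performance element, critic, learning element, problem generator."""

    def __init__(self):
        self.knowledge = {}          # what the performance element uses to pick actions
        self.last_percept = None

    def performance_element(self, percept):
        """Select an external action from current knowledge (placeholder policy)."""
        return self.knowledge.get(percept, "explore")

    def critic(self, percept):
        """Give feedback on how well the agent is doing (placeholder reward)."""
        return 1.0 if percept == "goal" else 0.0

    def learning_element(self, percept, action, feedback):
        """Use the critic's feedback, and successive percepts, to improve the knowledge."""
        if feedback > 0 and self.last_percept is not None:
            self.knowledge[self.last_percept] = action
        self.last_percept = percept

    def problem_generator(self):
        """Suggest an exploratory action that leads to new, informative experiences."""
        return "try_something_new"
```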

Artificial General Intelligence

Artificial General intelligence or "Strong" AI refers to machines that exhibit human intelligence. In other words, AGI can successfully perform any intellectual task that a human being can. This is the sort of AI that we see in movies like "Her" or other sci-fi movies in which humans interact with machines and operating systems that are conscious, sentient, and driven by emotion and self-awareness.

Artificial Superintelligence

Artificial Super Intelligence (ASI) will surpass human intelligence in all aspects — from creativity, to general wisdom, to problem-solving. Machines will be capable of exhibiting intelligence that we haven't seen in the brightest amongst us. This is the type of AI that many people are worried about, and the type of AI that people like Elon Musk think will lead to the extinction of the human race.

Feedback

Every system has to give feedback to the user, and this feedback should be immediate, informative, and sufficient: not too little, not too much. Good feedback is planned, and it helps the user understand the system and use it to plan their next actions.

stochastic model

For a given current state and action executed by the agent, the next state or outcome cannot be exactly determined. For example, if the agent kicks the ball in a particular direction, the ball may or may not be stopped by other players, or the soccer field can change in many different ways depending on how the players move.
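
A side-by-side sketch of deterministic vs. stochastic next-state functions; the soccer-kick setup and probabilities are invented:

```python
import random

def deterministic_next_state(position, kick):
    """Deterministic model: the next state follows exactly from state + action."""
    return position + kick

def stochastic_next_state(position, kick):
    """Stochastic model: the same state + action can lead to different next states."""
    if random.random() < 0.3:                           # e.g. another player blocks the ball
        return position
    return position + kick + random.uniform(-1, 1)      # ball deflected a little

print(deterministic_next_state(10, 5))   # always 15
print(stochastic_next_state(10, 5))      # varies from run to run
```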

Fully observable environment

If the agent's sensors give it complete access to the state of the task environment, then it is fully observable; easier to deal with because the agent need not maintain state between decisions; Crossword puzzle

Precision vs recall

Precision is a good measure when the cost of a false positive is high. For instance, in email spam detection a false positive means that a non-spam email (actual negative) has been identified as spam (predicted spam); the user might lose important emails if the precision of the spam-detection model is not high. Recall is the metric to use for selecting the best model when there is a high cost associated with a false negative. For instance, in fraud detection, if a fraudulent transaction (actual positive) is predicted as non-fraudulent (predicted negative), the consequences can be very bad for the bank.
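
A quick sketch of both metrics from confusion counts, using made-up spam-filter numbers:

```python
# Hypothetical spam-filter results: tp = spam caught, fp = good mail flagged as spam,
# fn = spam that slipped through.
tp, fp, fn = 90, 10, 30

precision = tp / (tp + fp)   # of everything flagged as spam, how much really was spam
recall    = tp / (tp + fn)   # of all real spam, how much was caught

print(f"precision={precision:.2f}  recall={recall:.2f}")
# High precision matters when false positives are costly (losing real email);
# high recall matters when false negatives are costly (missing fraud).
```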

General Data Protection Regulation (GDPR)

A set of regulations adopted by the European Union to protect Internet users from clandestine tracking and unauthorized personal data usage.

Signifiers

Signifiers are communication devices for designers to let users know what the object can do. Examples: traffic signs, signs saying "push", arrows...

designing agentive tech

Step 1: Setting up the agent
-Conveying capability
-Conveying limitations
-No physical affordances
-Getting the user's goals, preferences, and permissions
Step 2: During run time
-Providing information for monitoring: allowing users to see the current status of the system and why the agent took a specific action (transparency).
-Notifications: letting users know if the agent is running, when it has completed tasks, and when it has concerns.
Step 3: Handling exceptions
-Refine false positives
-Expose false negatives
Step 4: Allowing handoff
-Handing off to an intermediary
-Handing off to the user
-Takeback

computer vision

Techniques to let computers and robots see and understand the world around them.

Partially Observable Environment

The agent's sensors do not have complete access to the state of the task environment; poker

Deontological Ethics

The idea that actions are right and wrong in themselves independently of any consequences

speech recognition

The process by which computers recognize voice patterns and words, and then convert them to digital data.

AI Thinking humanly

Thinking humanly means trying to understand and model how the human mind works.
• There are (at least) two possible routes that humans use to find the answer to a question:
- We reason about it to find the answer. This is called "introspection".
- We conduct experiments to find the answer, drawing upon scientific techniques to conduct controlled experiments and measure change.
• The field of cognitive science focuses on modeling how people think.

Simple Reflex Agent

This agent selects actions based on the agent's current percept of the world, not on past percepts. For example, if a Mars lander found a rock in a specific place that it needed to collect, it would collect it; being a simple reflex agent, if it found the same kind of rock in a different place it would still pick it up, because it does not take into account that it has already collected one.
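
A condition-action sketch in Python showing why the simple reflex lander collects the same rock twice; the rules and labels are hypothetical:

```python
def simple_reflex_act(percept):
    """Act only on the current percept; no memory of past percepts."""
    if percept == "target_rock":
        return "collect"
    return "keep moving"

# With no internal state, the agent collects the same kind of rock every time it sees it.
print(simple_reflex_act("target_rock"))  # collect
print(simple_reflex_act("target_rock"))  # collect again -- it cannot remember the first pickup
```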

conceptual model

a verbal or graphical explanation of how a system works or is organized. Based on the design, the user collects information about the system by touching it, looking at its shape, and/or reading the manual in order to build the conceptual model.

reinforcement learning

algorithms that are able to learn from experience; they are not given specific goals other than to maximize some reward. A pro of reinforcement learning is the balance between trying what has worked in the past and trying new things to seek further improvement: the algorithm will try new actions or classifications incrementally and is more likely to discover new insights and ways of doing things. Standard supervised learning algorithms can't achieve this balance. A potential con is that you can't incorporate explicit rules later on as you can with supervised learning (e.g., stop at a red light), and a lot of data inputs may be necessary for the machine to receive the proper feedback. Reinforcement learning can also be quite difficult to implement and requires much expertise.
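
A minimal tabular Q-learning sketch on a made-up five-cell corridor; the states, rewards, and hyperparameters are all invented for illustration:

```python
import random

# Corridor of 5 cells; reaching cell 4 gives reward 1, everything else 0.
n_states = 5
actions = [-1, +1]                        # move left or move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1     # learning rate, discount, exploration rate

for episode in range(200):
    s = 0
    while s != 4:
        # Balance exploiting what worked before against exploring new actions.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            best = max(Q[(s, b)] for b in actions)
            a = random.choice([b for b in actions if Q[(s, b)] == best])
        s_next = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s_next == 4 else 0.0
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s_next, b)] for b in actions) - Q[(s, a)])
        s = s_next

# Learned greedy policy for cells 0-3 (should be mostly +1, i.e. "move right").
print([max(actions, key=lambda a: Q[(s, a)]) for s in range(4)])
```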

Unsupervised Learning

algorithms in which the data is not labeled or organized ahead of time; instead, relationships must be discovered without human intervention. A pro is that you do not need a person to label the examples or patterns, so people are not involved in the training. This can also be a con: with no human interaction to train the machine, it will not initially know whether the classifications it makes are right or wrong, and there can be more erroneous results at first. The patterns and clusters discovered may or may not be of value to you; this, again, can be a pro or a con. You may discover trends you were not looking for, but you may also not get the results you desire.

Supervised Learning

algorithms that use data that has already been labeled or organized; human input is required to provide this feedback. An example is teaching a machine to recognize a picture of a dog: you train the machine by showing it pictures of various breeds of dogs labeled as dogs, alongside pictures of cats labeled as cats. One pro of this approach is that the system can be better controlled, and accuracy typically increases with the number of labeled examples or patterns provided. On the flip side, qualified people need to label the examples or patterns used for training, which can be very time consuming and labor intensive, and this limits the scalability of the approach.
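
A toy nearest-neighbour sketch of the labeled-examples idea; the two-feature "dog vs. cat" vectors and the feature names are entirely made up:

```python
# Labeled training data: (ear_length_cm, weight_kg) -> label. The numbers are invented.
training = [((4.0, 25.0), "dog"), ((5.5, 30.0), "dog"),
            ((8.0, 4.0),  "cat"), ((7.5, 3.5),  "cat")]

def classify(features):
    """Predict the label of the closest labeled example (1-nearest-neighbour)."""
    def dist(example):
        (x, y), _ = example
        return (x - features[0]) ** 2 + (y - features[1]) ** 2
    return min(training, key=dist)[1]

print(classify((4.5, 28.0)))  # 'dog' -- more labeled examples generally means better accuracy
```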

Actuators

allow for something to act on an environment

percept

refers to the agent's perceptual inputs at any given instant

static environment

Static AI environments rely on data and knowledge sources that don't change frequently over time. Speech analysis is a problem that operates in a static AI environment; crossword puzzles are static.

Assistive Technology

Examples: a switch (operated manually or by voice) to turn the light on or off; showing you the electricity consumption of each light bulb.

Artificial Intelligence

the ability of machines to use algorithms to learn from data and use what has been learned to make decisions as a human would; the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision making, and translation between languages

Affordances

the possibilities for action offered by objects and situations

Discoverability

the product must be self-explanatory: the user should be able to discover what actions are possible and how to perform them

automatic technology

Example: turning the light on at 6pm and off at 11pm.

