Term
Task environment specification (PEAS) |
Definition
Performance Measure, Environment, Actuators, Sensors |
|
|
Term
Definition of a rational agent |
Definition
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has. |
|
|
Term
Rule for designing performance measures |
Definition
As a general rule, it is better to design performance measures according to what one actually wants in the environment, rather than according to how one thinks the agent should behave. |
|
|
Term
Autonomy |
Definition
To the extent that an agent relies on the prior knowledge of its designer rather than on its own percepts, we say that the agent lacks autonomy. A rational agent should be autonomous—it should learn what it can to compensate for partial or incorrect prior knowledge. For example, a vacuum-cleaning agent that learns to foresee where and when additional dirt will appear will do better than one that does not. As a practical matter, one seldom requires complete autonomy from the start: when the agent has had little or no experience, it would have to act randomly unless the designer gave some assistance. So, just as evolution provides animals with enough built-in reflexes to survive long enough to learn for themselves, it would be reasonable to provide an artificial intelligent agent with some initial knowledge as well as an ability to learn. |
|
|
Term
Task Environment properties |
|
Definition
Fully observable vs. partially observable: if an agent's sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable.
Single agent vs. multiagent; deterministic vs. stochastic; episodic vs. sequential; static vs. dynamic; discrete vs. continuous; known vs. unknown. |
|
|
Term
Multiagent task environment vs. a single agent in a task environment with multiple objects/factors |
|
Definition
The key distinction is whether B's behavior is best described as maximizing a performance measure whose value depends on agent A's behavior. For example, in chess, the opponent entity B is trying to maximize its performance measure, which, by the rules of chess, minimizes agent A's performance measure. Thus, chess is a competitive multiagent environment. |
|
|
Term
Deterministic Task Environment |
|
Definition
If the next state of the environment is completely determined by the current state and the action executed by the agent, then we say the environment is deterministic; otherwise, it is stochastic. |
|
|
Term
Uncertain Task Environment |
|
Definition
We say an environment is uncertain if it is not fully observable or not deterministic. |
|
|
Term
Difference between a stochastic task environment and a nondeterministic task environment |
|
Definition
A stochastic environment is characterized by probabilities over outcomes. A nondeterministic environment is not characterized by probabilities; an action is instead associated with a set of possible outcomes. |
|
|
Term
Episodic task environment |
|
Definition
The agent's current percept and action have no connection with either past or future percepts and actions; each episode stands alone. |
|
|
Term
Sequential task environment |
|
Definition
The agent's current percept and action are connected to past and/or future percepts and actions. |
|
|
Term
Static versus Dynamic task environment |
|
Definition
If the environment can change while the agent is deliberating, it's dynamic; otherwise it's static. |
|
|
Term
Discrete vs Continuous task environment |
|
Definition
Digital vs. analog, applied to the state of the environment: chess positions are discrete, whereas the dashboard dials of a robot taxi report continuous values. |
|
|
Term
Known vs Unknown task environment |
|
Definition
Whether the "laws of physics" or rules of operation of the environment are known or unknown to the agent, similar to encountering a video game and its unknown buttons for the first time. This is different from observable vs. unobservable environments. |
|
|
Term
Hardest case task environment |
|
Definition
The hardest case is partially observable, multiagent, stochastic, sequential, dynamic, continuous, and unknown. |
|
|
Term
Observable vs Unobservable task environment |
|
Definition
Self-describing. There is also the intermediate case of a semi-observable (partially observable) environment. |
|
|
Term
What is an agent function? |
|
Definition
The mapping from every possible percept sequence (input) to an action (output). |
|
|
Term
What is an agent program? |
|
Definition
A program that embodies the agent function |
|
|
Term
Why is a table driven agent program not realistic for the large majority of agents? |
|
Definition
Because the table would end up being too big to store or create for most agent functions. |
|
|
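A minimal Python sketch of a table-driven agent (an illustration, not the book's code; the names are mine). The table is keyed by the entire percept sequence, which is exactly why it becomes astronomically large for any non-trivial environment:

```python
# Table-driven agent: looks up the action for the ENTIRE percept history.
def make_table_driven_agent(table):
    """table: dict mapping tuples of percepts to actions."""
    percepts = []  # full percept history so far

    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))  # one table entry per sequence

    return agent

# Two-square vacuum world: even here, every *sequence* needs an entry.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
    # ... the number of entries grows exponentially with lifetime length.
}
agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))  # -> Suck
```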
Term
What is a key challenge for AI regarding methods for creating an agent program for a given agent function? |
|
Definition
The key challenge for AI is to find out how to write programs that, to the extent possible, produce rational behavior from a smallish program rather than from a vast table. We have many examples showing that this can be done successfully in other areas: for example, the huge tables of square roots used by engineers and schoolchildren prior to the 1970s have now been replaced by a five-line program for Newton's method running on electronic calculators. The question is, can AI do for general intelligent behavior what Newton did for square roots? We believe the answer is yes. |
|
|
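The "five-line program" can be sketched in Python (a hedged illustration, not the calculators' actual code): Newton's method iterates x ← (x + n/x)/2, which converges rapidly to √n.

```python
# Newton's method for square roots: replaces a huge lookup table.
def newton_sqrt(n, tolerance=1e-12):
    # Assumes n > 0; any positive initial guess works.
    x = n if n > 1 else 1.0
    while abs(x * x - n) > tolerance * n:
        x = (x + n / x) / 2  # Newton update for f(x) = x^2 - n
    return x

print(newton_sqrt(2.0))  # -> 1.4142135623730951
```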
Term
What is a simple reflex agent? |
|
Definition
These agents select actions on the basis of the current percept, ignoring the rest of the percept history. |
|
|
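A simple reflex agent for the two-square vacuum world, along the lines of the book's vacuum example (a sketch; the string names are illustrative). Note that the action is a pure function of the current percept alone:

```python
# Simple reflex agent: condition-action rules on the CURRENT percept only.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))  # -> Left
```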
Term
What is a model-based reflex agent? |
|
Definition
It tracks the world state in its memory, in particular currently unobservable factors as they were last observed. Its action is determined not only by the current percept but also by its memory of the world state. |
|
|
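A model-based variant of the same vacuum agent (a sketch, assuming just two squares A and B): internal state remembers the last observed status of the square the agent cannot currently see.

```python
# Model-based reflex agent: percept + remembered world state -> action.
class ModelBasedVacuumAgent:
    def __init__(self):
        self.model = {"A": None, "B": None}  # last known status per square

    def __call__(self, percept):
        location, status = percept
        self.model[location] = status        # update memory from the percept
        if status == "Dirty":
            return "Suck"
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"                    # memory says the world is clean
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent(("A", "Clean")))  # -> Right (B's status is still unknown)
print(agent(("B", "Clean")))  # -> NoOp  (memory says both squares clean)
```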
Term
What is a goal-based agent? |
|
Definition
It uses both its internal model and a set of goals to choose actions best suited to achieving those goals. It does that by attempting to answer the questions: "What will happen if I do such-and-such?" and "Will that make me happy?" |
|
|
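A one-step-lookahead sketch of the goal-based idea (all names here are hypothetical): the agent uses a transition model to predict "what will happen if I do such-and-such" and keeps any action whose predicted outcome satisfies the goal.

```python
# Goal-based choice via one-step lookahead through a transition model.
def goal_based_action(state, actions, result, goal_test):
    for action in actions(state):
        if goal_test(result(state, action)):  # "Will that make me happy?"
            return action
    return None  # no single action suffices; a deeper search is needed

# Toy usage: step along a number line from 2 toward the goal position 3.
print(goal_based_action(
    2,
    lambda s: ["+1", "-1"],
    lambda s, a: s + 1 if a == "+1" else s - 1,
    lambda s: s == 3,
))  # -> +1
```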
Term
What is a utility based agent? |
|
Definition
Partial observability and stochasticity are ubiquitous in the real world, and so, therefore, is decision making under uncertainty. Technically speaking, a rational utility-based agent chooses the action that maximizes the expected utility of the action outcomes—that is, the utility the agent expects to derive, on average, given the probabilities and utilities of each outcome. |
|
|
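The expected-utility computation itself is tiny; the probabilities and utilities below are invented purely for illustration:

```python
# EU(action) = sum over outcomes of P(outcome) * U(outcome).
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

actions = {
    "safe_route": [(1.0, 60.0)],                  # certain, modest utility
    "fast_route": [(0.9, 100.0), (0.1, -200.0)],  # risky, higher on average
}
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # -> fast_route 70.0
```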
Term
What is a learning agent? |
|
Definition
Learning in intelligent agents can be summarized as a process of modification of each component of the agent to bring the components into closer agreement with the available feedback information (in pursuit of goals and utility), thereby improving the overall performance of the agent. |
|
|
Term
What are the 4 parts of a learning agent? |
|
Definition
Critic, Learning Element, Problem Generator, Performance Element |
|
|
Term
What is the purpose of the problem generator in the learning agent? |
|
Definition
It varies the actions chosen by the performance element for the purpose of learning more about the environment, such as stepping on the brakes of a robot car to see how quickly they stop the car on a wet road. |
|
|
Term
Ch 3 What is a problem solving agent? |
|
Definition
A type of goal-based agent in which the world state is represented atomically. |
|
|
Term
What is a planning agent? |
|
Definition
A goal-based agent that uses a factored or structured world representation. |
|
|
Term
Ch 3 pre-review What is polynomial time? |
|
Definition
In computer science, the time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the size of the input to the problem. The time complexity of an algorithm is commonly expressed using big O notation, which suppresses multiplicative constants and lower order terms. When expressed this way, the time complexity is said to be described asymptotically, i.e., as the input size goes to infinity. For example, if the time required by an algorithm on all inputs of size n is at most 5n³ + 3n, the asymptotic time complexity is O(n³). |
|
|
Term
ch 3 pre-review What is a complexity class? |
|
Definition
In computational complexity theory, a complexity class is a set of problems of related resource-based complexity. |
|
|
Term
ch 3 pre-review What is a nondeterministic algorithm? |
|
Definition
In computer science, a nondeterministic algorithm is an algorithm that can exhibit different behaviors on different runs, as opposed to a deterministic algorithm. |
|
|
Term
ch 3 pre-review What is NP and NP-Complete? |
|
Definition
In computational complexity theory, NP is one of the most fundamental complexity classes. The abbreviation NP refers to "nondeterministic polynomial time."
Intuitively, NP is the set of all decision problems for which the instances where the answer is "yes" have efficiently verifiable proofs of the fact that the answer is indeed "yes." More precisely, these proofs have to be verifiable in polynomial time by a deterministic Turing machine. In an equivalent formal definition, NP is the set of decision problems where the "yes"-instances can be recognized in polynomial time by a non-deterministic Turing machine. The equivalence of the two definitions follows from the fact that an algorithm on such a non-deterministic machine consists of two phases, the first of which consists of a guess about the solution which is generated in a non-deterministic way, while the second consists of a deterministic algorithm which verifies or rejects the guess as a valid solution to the problem.[2]
The complexity class P is contained in NP, but NP contains many important problems, the hardest of which are called NP-complete problems, for which no polynomial-time algorithms are known. The most important open question in complexity theory, the P = NP problem, asks whether such algorithms actually exist for NP-complete, and by corollary, all NP problems. It is widely believed that this is not the case. |
|
|
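The "efficiently verifiable proof" idea is concrete: for a yes-instance, a short certificate can be checked in polynomial time. A sketch for SUBSET-SUM, a classic NP-complete problem (the function and variable names are illustrative):

```python
# Polynomial-time verifier for SUBSET-SUM: does some subset sum to target?
def verify_subset_sum(nums, target, certificate):
    """certificate: purported list of distinct indices into nums."""
    return (len(set(certificate)) == len(certificate)
            and all(0 <= i < len(nums) for i in certificate)
            and sum(nums[i] for i in certificate) == target)

print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))  # 4+5=9 -> True
print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [0, 1]))  # 3+34  -> False
```

Finding such a certificate may take exponential time; verifying one is cheap. That asymmetry is the essence of NP.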
Term
Ch 3 pre-review What is NP-Complete? |
|
Definition
A decision problem C is NP-complete if:
1. C is in NP, and
2. Every problem in NP is reducible to C in polynomial time. |
|
|
Term
ch 2 summary What is the first step in designing an agent? |
|
Definition
A task environment specification includes the performance measure, the external environment, the actuators, and the sensors. In designing an agent, the first step must always be to specify the task environment as fully as possible. |
|
|
Term
ch 2 summary How many basic designs of an agent program are there? |
|
Definition
The agent program implements the agent function. There is a variety of basic agent-program designs reflecting the kind of information made explicit and used in the decision process. The designs vary in efficiency, compactness, and flexibility. The appropriate design of the agent program depends on the nature of the environment.
Alternative answer at a rougher resolution, 5 designs: • Simple reflex agents; • Model-based reflex agents; • Goal-based agents; • Utility-based agents; and • Learning agents. |
|
|
Term
ch 2 What's the difference in how an agent and the (abstract) agent function deal with percepts? |
|
Definition
The agent programs that we design in this book all have the same skeleton: they take the current percept as input from the sensors and return an action to the actuators. Notice the difference between the agent program, which takes the current percept as input, and the agent function, which takes the entire percept history. |
|
|
Term
What are the 5 components of a problem definition? |
|
Definition
* The agent's initial state
* A description of the possible actions available to the agent, given a particular state
* A description of what each action does; the formal name for this is the transition model
* The goal test
* The path cost |
|
|
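The five components map directly onto an interface; a Python sketch (the class and method names are illustrative, and unit step costs are assumed by default):

```python
# The five components of a problem definition as an abstract interface.
class Problem:
    def __init__(self, initial_state):
        self.initial_state = initial_state   # 1. initial state

    def actions(self, state):                # 2. applicable actions
        raise NotImplementedError

    def result(self, state, action):         # 3. transition model
        raise NotImplementedError

    def goal_test(self, state):              # 4. goal test
        raise NotImplementedError

    def path_cost(self, cost_so_far, state, action, next_state):
        return cost_so_far + 1               # 5. path cost (unit steps)
```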
Term
What is the solution to a defined problem? |
|
Definition
A solution to a problem is an action sequence that leads from the initial state to a goal state. Solution quality is measured by the path cost function, and an optimal solution has the lowest path cost among all solutions. |
|
|
Term
Abstraction |
Definition
The process of removing detail from a representation is called abstraction. |
|
|
Term
Valid abstraction |
Definition
The abstraction is valid if we can expand any abstract solution into a solution in the more detailed world; |
|
|
Term
Useful abstraction |
Definition
The abstraction is useful if carrying out each of the actions in the solution is easier than the original problem |
|
|
Term
Choosing a good abstraction |
Definition
The choice of a good abstraction thus involves removing as much detail as possible while retaining validity and ensuring that the abstract actions are easy to carry out. Were it not for the ability to construct useful abstractions, intelligent agents would be completely swamped by the real world. |
|
|
Term
State space |
Definition
Together, the initial state, actions, and transition model implicitly define the state space of the problem—the set of all states reachable from the initial state by any sequence of actions. |
|
|
Term
|
Definition
initial states, matched to given actions, matched to resulting states |
|
|
Term
If a problem is NP-Complete, does that mean its best known algorithmic solution is closer to the worst or the best possible algorithmic solutions for similar problems? |
|
Definition
It will be closer to the worst. |
|
|
Term
Is the sliding block puzzle NP-Complete or NP? |
|
Definition
Finding the shortest solutions to sliding-block puzzles is known to be NP-complete, so the puzzle family is NP-complete (and hence in NP). |
|
Term
Incremental formulation |
Definition
Starts with an empty state and adds components to the state while searching for a solution, as in placing queens one at a time in the 8-queens problem. |
|
|
Term
Complete-state formulation |
|
Definition
Starts with all components present and moves state elements around while searching for a solution, as in moving already-placed queens in the 8-queens problem. |
|
|
Term
P versus NP problem |
Definition
The P versus NP problem is a major unsolved problem in computer science. Informally, it asks whether every problem whose solution can be efficiently checked by a computer can also be efficiently solved by a computer |
|
|
Term
NP-hard |
Definition
NP-hard problems are those at least as hard as NP-complete problems, meaning that all NP-problems can be reduced to them, but NP-hard problems need not be in NP and thus need not have solutions verifiable in polynomial time. |
|
|
Term
|
Definition
Find a route on a map that visits each city at least once. |
|
|
Term
Traveling salesperson problem (TSP) |
|
Definition
Find the shortest route on a map that visits each city exactly once. |
|
|
Term
Automated Assembly sequencing |
|
Definition
In assembly problems, the aim is to find an order in which to assemble the parts of some object. |
|
|
Term
Protein design |
Definition
Another important assembly problem is protein design, in which the goal is to find a sequence of amino acids that will fold into a three-dimensional protein with the right properties to cure some disease. |
|
|
Term
Frontier |
Definition
The set of all leaf nodes available for expansion at any given point is called the frontier. |
|
|
Term
Leaf node |
Definition
a node with no (currently expanded) children on the tree |
|
|
Term
Search strategy |
Definition
The process of expanding nodes on the frontier continues until either a solution is found or there are no more states to expand. The general TREE-SEARCH algorithm is shown informally in Figure 3.7. Search algorithms all share this basic structure; they vary primarily according to how they choose which state to expand next—the so-called search strategy. |
|
|
Term
General tree search pseudocode |
|
Definition
function TREE-SEARCH(problem) returns a solution, or failure
  initialize the frontier using the initial state of problem
  loop do
    if the frontier is empty then return failure
    choose a leaf node and remove it from the frontier
    if the node contains a goal state then return the corresponding solution
    expand the chosen node, adding the resulting nodes to the frontier
(p. 77) |
|
|
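A direct Python rendering of the pseudocode above, reusing the Problem interface sketched earlier (an illustration, not the book's code; with a FIFO frontier this behaves as breadth-first search):

```python
# TREE-SEARCH: expand frontier nodes until a goal is found or none remain.
from collections import deque

def tree_search(problem):
    frontier = deque([(problem.initial_state, [])])  # (state, actions so far)
    while frontier:                       # empty frontier -> failure
        state, path = frontier.popleft()  # choose a leaf node and remove it
        if problem.goal_test(state):
            return path                   # the corresponding solution
        for action in problem.actions(state):        # expand the chosen node
            frontier.append((problem.result(state, action), path + [action]))
    return None  # failure
```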
Term
How can search algorithms avoid exploring redundant paths? |
|
Definition
As the saying goes, algorithms that forget their history are doomed to repeat it. The way to avoid exploring redundant paths is to remember where one has been. To do this, we augment the TREE-SEARCH algorithm with a data structure called the explored set (also known as the closed list), which remembers every expanded node. |
|
|
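The explored-set augmentation in the same style (a simplified sketch: the book's GRAPH-SEARCH also avoids adding states already on the frontier, which is omitted here for brevity):

```python
# GRAPH-SEARCH sketch: an explored set prevents re-expanding visited states.
from collections import deque

def graph_search(problem):
    frontier = deque([(problem.initial_state, [])])
    explored = set()                        # every state already expanded
    while frontier:
        state, path = frontier.popleft()
        if problem.goal_test(state):
            return path
        explored.add(state)
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in explored:       # skip states seen before
                frontier.append((child, path + [action]))
    return None
```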
Term
Difference between nodes and states |
|
Definition
Up to now, we have not been very careful to distinguish between nodes and states, but in writing detailed algorithms it's important to make that distinction. A node is a bookkeeping data structure used to represent the search tree. A state corresponds to a configuration of the world. Thus, nodes are on particular paths, as defined by PARENT pointers, whereas states are not. Furthermore, two different nodes can contain the same world state if that state is generated via two different search paths. |
|
|
Term
The four components of a search-tree node |
Definition
* node.STATE: the state in the state space to which the node corresponds;
* node.PARENT: the node in the search tree that generated this node;
* node.ACTION: the action that was applied to the parent to generate the node;
* node.PATH-COST: the cost, traditionally denoted by g(n), of the path from the initial state to the node, as indicated by the parent pointers. |
|
|
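The four components translate into a small data structure, plus the standard way a child node is built from its parent (a sketch using the earlier Problem interface):

```python
# Search-tree node: STATE, PARENT, ACTION, PATH-COST (= g(n)).
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                        # node.STATE
    parent: Optional["Node"] = None   # node.PARENT
    action: Any = None                # node.ACTION
    path_cost: float = 0.0            # node.PATH-COST, i.e. g(n)

def child_node(problem, parent, action):
    state = problem.result(parent.state, action)
    cost = problem.path_cost(parent.path_cost, parent.state, action, state)
    return Node(state, parent, action, cost)  # parent pointers define the path
```

Two different Node objects can hold the same state reached via different paths, which is exactly the node/state distinction the card above makes.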
Term
Exponential-complexity search problems |
|
Definition
Exponential-complexity search problems cannot be solved by uninformed methods for any but the smallest instances. |
|
|