CS 236


Prim's algorithm

(Minimum spanning trees, O(m + n log n) with a Fibonacci heap, where m is the number of edges and n is the number of vertices.) Start from an arbitrary vertex and grow the tree one edge at a time until all vertices are included, never creating a cycle: at each step add the lowest-cost edge that connects a vertex already in the tree to a vertex not yet in the tree. Finish when the tree has n - 1 edges, where n is the number of vertices.
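A minimal Python sketch of this process (the adjacency-list format and the example graph are assumptions for illustration; a binary heap gives O(m log n), while the O(m + n log n) bound above needs a Fibonacci heap):

import heapq

def prim(graph, start):
    # graph: dict mapping vertex -> list of (weight, neighbor) pairs
    visited = {start}
    heap = [(w, start, v) for w, v in graph[start]]   # candidate edges leaving the tree
    heapq.heapify(heap)
    tree = []                                         # chosen MST edges
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)                 # lowest-cost candidate edge
        if v in visited:
            continue                                  # skip: it would create a cycle
        visited.add(v)
        tree.append((w, u, v))
        for w2, x in graph[v]:
            if x not in visited:
                heapq.heappush(heap, (w2, v, x))
    return tree                                       # n - 1 edges for a connected graph

g = {'a': [(1, 'b'), (4, 'c')],
     'b': [(1, 'a'), (2, 'c'), (5, 'd')],
     'c': [(4, 'a'), (2, 'b'), (1, 'd')],
     'd': [(5, 'b'), (1, 'c')]}
print(prim(g, 'a'))   # [(1, 'a', 'b'), (2, 'b', 'c'), (1, 'c', 'd')]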

predicate logic equivalances

1. ∀x [P(x) ∧ Q(x)] ≡ ∀x P(x) ∧ ∀x Q(x)
2. ¬∀x P(x) ≡ ∃x ¬P(x). Think in these terms: ¬[P(x1) ∧ P(x2) ∧ . . . ∧ P(xn)] ≡ [¬P(x1) ∨ ¬P(x2) ∨ . . . ∨ ¬P(xn)]. In English: "Not every x makes P(x) true," i.e. "There exists an x for which P(x) is false."
3. ¬∃x P(x) ≡ ∀x ¬P(x). In English: "There does not exist an x such that P(x) is true," i.e. "For all x, P(x) is false."
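A quick finite-domain check of equivalence 2 in Python (the domain and the predicate "x is even" are made-up examples):

domain = range(10)
P = lambda x: x % 2 == 0             # hypothetical predicate: "x is even"

lhs = not all(P(x) for x in domain)  # ¬∀x P(x)
rhs = any(not P(x) for x in domain)  # ∃x ¬P(x)
print(lhs, rhs, lhs == rhs)          # True True True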

Relation Graphs

A graph is made up of vertices (or nodes) and edges (lines that connect two vertices). A graph can be used to represent a relation on a set, since such a relation relates two elements a and b. Each element in the set is a vertex, and each tuple in the relation defines a directed edge: (a, b) is an edge from vertex a to vertex b. (It looks like a state machine: each element, or vertex, is a state, and each tuple, or edge, is a line to the next state.)

Lexer

A program that takes a program file as input and scans the file for tokens. The output of the lexer is a collection of tokens. These tokens can then be read by the parser.

Partial Orders (relations)

A relation R on a set is called a partial ordering or partial order if it is reflexive, antisymmetric, and transitive. (like an equivalence relation but a partial order is antisymmetric instead of symmetric.)

Relational Algebra, operations, and matrix representation

A relation is a set of tuples made up of elements from sets. Specifically, a relation is a subset of the Cartesian product of some number of sets.
Union: R1 ∪ R2 = all unique tuples in either relation
Intersection: R1 ∩ R2 = all tuples contained in both relations
Difference: R1 − R2 = all tuples contained only in R1
Cross product: R1 × R2 = essentially the same as the Cartesian product of two sets
Matrix representation: put the elements of the set along the top and side of the matrix, and fill it with 1 where the pair is in the relation and 0 where it is not.
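A sketch of these operations using Python sets of tuples (the relations R1 and R2 are made up for illustration):

R1 = {(1, 'a'), (2, 'b'), (3, 'c')}
R2 = {(2, 'b'), (3, 'd')}

print(R1 | R2)    # union: all unique tuples in either relation
print(R1 & R2)    # intersection: tuples in both -> {(2, 'b')}
print(R1 - R2)    # difference: tuples only in R1
cross = {r + s for r in R1 for s in R2}   # cross product: concatenate every pairing
print(len(cross)) # |R1| * |R2| = 6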

Equivalence Relations: Reflexive, Symmetric, Transitive

A relation on a set A is called an equivalence relation if it is reflexive, symmetric, and transitive. Two elements a and b that are related by an equivalence relation are called equivalent. The notation a ∼ b is often used to denote that a and b are equivalent elements with respect to a particular equivalence relation.

Set Theory: set-builder notation, subsets, set size and cardinality, power set

A set is an unordered collection of objects, called elements or members of the set. A set is said to contain its elements. We write a ∈ A to denote that a is an element of the set A; the notation a ∉ A denotes that a is not an element of the set A. Notation: usually a capital letter with the elements between braces.
Set-builder notation: lets us describe what elements are in a set without being explicit: A = {x | x satisfies some property}. Z⁺ is the set of all positive integers, where Z is the set of all integers.
Equality: two sets are equal if they contain exactly the same elements.
Subsets: A is a subset of B if and only if every element of A is also an element of B, written A ⊆ B. A proper subset cannot be equal and is written A ⊂ B.
Set size: if there are exactly n distinct elements in S, where n is a nonnegative integer, we say that S is a finite set and that n is the cardinality of S, denoted |S|. A set is infinite if it is not finite.
Power set: given a set S, the power set of S is the set of all subsets of S, denoted P(S) (alternate notation: 2^S).
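A small sketch of the power set in Python using itertools (the example set is arbitrary):

from itertools import chain, combinations

def power_set(s):
    s = list(s)
    # subsets of every size, from the empty set up to s itself
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

S = {1, 2, 3}
print(len(power_set(S)))   # 8 = 2**|S|
print(power_set(S))        # [set(), {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}]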

Pushdown Automata

A stack-based machine used to recognize context-free languages. To derive a string, start at the start symbol and apply production rules until only terminals remain in the resulting string. Use a format as follows:
• String before: S
• Production: S → 0S1
• String after: 0S1
• Production: S → λ
• String after: 01
So S ⇒* 01.

parse table

A table used in parsing LL(k) grammars. A parse table is a way to write the production rules of a grammar so that strings can be parsed to see if they are part of the language. To build the parse table, create a column for every terminal plus the extra symbol #, and a row for each symbol in the vocabulary (all non-terminals and terminals) plus the extra symbol #. Keep the order of the terminals consistent for both rows and columns.
We can "trace" a given string using the parse table as a key in a separate table with the following columns:
Action: a description of the action to take, e.g. (stack val, input val) = (stack replacement, output val)
Stack: initialize a stack with # and the starting non-terminal
Input: the string to parse; keep track of where you are with an arrow
Output: a string with the output from each action (found by looking at the subscript number of each production)

Parse Trees, Grammar Ambiguity and causes (precedence ambiguity)

A tree that identifies the production rules of a grammar used to derive a valid string; evaluate from bottom to top. A grammar is ambiguous when we can create two different parse trees for the same string; we don't want ambiguity in our grammar.
Types of ambiguity: precedence, left associativity, and right associativity. Precedence defines the order of operations, e.g. '1 + 2 * 3'. Left associativity defines when operators should be evaluated from left to right, e.g. '6 / 2 * 3'. Right associativity defines when operators should be evaluated from right to left, e.g. '2^3^2'.
Fixing ambiguity: Precedence: add levels to the productions to force the proper order of operations further down the tree. Left associativity: keep the non-terminal on the left, but change the non-terminal on the right to be lower in the parse tree; that means changing the rule A → A ∗ A to A → A ∗ B.

depth first search

Algorithm:
procedure dfs(start: vertex)
    mark start
    visit(start)
procedure visit(v: vertex)
    for each vertex w adjacent to v and not marked
        mark w
        visit(w)
Spanning tree = a subgraph of G that is a tree containing every vertex of G.
Tree edges = the edges in the original graph that are also in our depth-first search tree.
Back edges = the edges in the original graph that are excluded from the depth-first search tree.
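A minimal Python version of the procedure above (the adjacency-list graph is an assumed representation):

def dfs(graph, start):
    marked = {start}
    tree_edges = []                      # edges of the depth-first search tree

    def visit(v):
        for w in graph[v]:               # each vertex w adjacent to v
            if w not in marked:
                marked.add(w)
                tree_edges.append((v, w))
                visit(w)

    visit(start)
    return tree_edges

g = {'a': ['b', 'c'], 'b': ['a', 'd'], 'c': ['a', 'd'], 'd': ['b', 'c']}
print(dfs(g, 'a'))   # [('a', 'b'), ('b', 'd'), ('d', 'c')]; the unused edge (a, c) is a back edge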

Backus-Naur (or Normal) Form

Backus-Naur form (BNF) is a standardization for identifying terminals and non-terminals in a grammar. It was created to support the development of programming languages. All non-terminals are identified by '<id>', while terminals are simply written as 'id'. Production rules use '::=' instead of '→'.

Conjunctive Normal Form

CNF (product of sums): a conjunction (AND) of clauses, where each clause is a disjunction (OR) of literals.

Special Functions

Composition of functions: let g be a function from the set A to the set B and let f be a function from the set B to the set C. The composition of the functions f and g, denoted for all a ∈ A by f ∘ g, is defined by (f ∘ g)(a) = f(g(a)); so the codomain of g must be the domain of f.
Inverse functions: let f be a one-to-one correspondence from the set A to the set B. The inverse function of f is the function that assigns to an element b belonging to B the unique element a in A such that f(a) = b. The inverse function of f is denoted by f⁻¹. Hence, f⁻¹(b) = a when f(a) = b.

Grammar

Concerned with syntax, or form, rather than semantics, or meaning.
G = (V, T, S, P)
V = vocabulary = N ∪ T (all terminals and non-terminals)
T = terminals
N = V − T (non-terminals)
S ∈ V (S is a symbol, not a set; it is the starting non-terminal)
P = {A → a, . . . } (set of production rules: left-hand side produces right-hand side)
Terminal = final; these elements of the vocabulary cannot be replaced by other symbols.
Productions = the rules that specify when we can replace a string from V*, the set of all strings of elements in the vocabulary, with another string. We denote by z0 → z1 the production that specifies that z0 can be replaced by z1 within a string.

Context-Free Grammars vs Context-sensitive Grammars

Context-free grammars = the subset of grammars in which every production rule has only a single non-terminal on the left-hand side.
Context-sensitive grammars = grammars in which some rule has more than a single symbol on the left-hand side.

parser

Determines the meaning of the program based on the order and type of tokens. This meaning is then given to the execution engine, which executes the program.

Mealy machines

outputs correspond to transitions between states

Warshall's Algorithm

Used to compute the transitive closure of a relation in O(n^3) time. The idea is that, given a graph, we connect vertices that are actually only connected "through" other vertices (draw out the graph when performing a transitive closure to better understand this). This makes sense for something like cities on a map connected by roads: vertices are cities, and you may have to travel through several to reach your destination. Warshall's algorithm uses triply nested loops: the outermost loop is the "mid" vertex that bridges the connection, the next loop is the "from" vertex, and the innermost loop is the "to" vertex. Basically: go through each element in the relation; for each element, everything that points to the current element now also points at everything the current element points at; continue until you've gone through all the elements.
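A sketch of Warshall's triply nested loops in Python, on a 0/1 relation matrix (the example relation a → b → c is made up):

def warshall(M):
    n = len(M)
    R = [row[:] for row in M]            # copy so the input matrix is untouched
    for mid in range(n):                 # the vertex that bridges the connection
        for frm in range(n):
            for to in range(n):
                if R[frm][mid] and R[mid][to]:
                    R[frm][to] = 1       # frm now reaches to "through" mid
    return R

M = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
print(warshall(M))   # [[0, 1, 1], [0, 0, 1], [0, 0, 0]] - the closure adds a -> c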

DFS Trees

Used to determine whether every vertex of a graph is connected.

Recursive-Descent Parsing

Using recursion to create a parser for our LL(k) grammars. Three-step process for a given LL(1) grammar: 1. Create a function for each non-terminal 'A'. 2. Within the function, check the input (current token) against the terminals generated by A for a match (this determines which production to use next). 3. Within the function, call the corresponding function for each non-terminal generated by A in the matched production (this pushes the next function on the stack before returning from the current function - this is the recursive part).
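A minimal sketch of this three-step process for the hypothetical LL(1) grammar S → a S b | c (the grammar is chosen only for illustration):

def parse(s):
    pos = 0

    def current():
        return s[pos] if pos < len(s) else '#'    # '#' marks end of input

    def match(t):
        nonlocal pos
        if current() != t:
            raise SyntaxError(f"expected {t!r}, got {current()!r}")
        pos += 1

    def S():                        # step 1: one function per non-terminal
        if current() == 'a':        # step 2: lookahead picks the production S -> a S b
            match('a'); S(); match('b')   # step 3: call the non-terminal's function
        elif current() == 'c':      # production S -> c
            match('c')
        else:
            raise SyntaxError(f"unexpected {current()!r}")

    S()
    if pos != len(s):
        raise SyntaxError("extra input after the string was derived")
    return True

print(parse("aacbb"))   # True; parse("aabb") would raise SyntaxError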

Graphs

G = (V, E): V = a non-empty set of vertices (or nodes); E = a set of edges. Each edge has either one or two vertices associated with it, called its endpoints. An edge is said to connect its endpoints.
Infinite graph = a graph with an infinite number of edges.
Finite graph = a graph with a finite number of edges.
Simple graph = a graph where the same pair of vertices is connected by only one edge, as opposed to multiple edges.
Multigraph = multiple edges are present between the same vertices.
Directed graph = consists of a non-empty set of vertices V and a set of directed edges (or arcs) E. Each directed edge is associated with an ordered pair of vertices. The directed edge associated with the ordered pair (u, v) is said to start at u and end at v.
Simple directed graph = has no loops - no edge (u, u) - and no multiple directed edges: there is only one edge (u, v) for every pair of vertices.
Directed multigraph = may have multiple directed edges. The ordered-pair edge (u, v) is said to have multiplicity n when there are n total (u, v) edges in the directed multigraph.
Undirected adjacency = two vertices u and v in an undirected graph G are called adjacent (or neighbors) in G if u and v are endpoints of an edge e of G. Such an edge e is called incident with the vertices u and v, and e is said to connect u and v.
Neighborhood = the set of all neighbors of a vertex v of G = (V, E), denoted N(v).
Directed adjacency = when (u, v) is an edge of a graph G with directed edges, u is said to be adjacent to v and v is said to be adjacent from u. The vertex u is called the initial vertex of (u, v), and v is called the terminal or end vertex of (u, v).
Path = a list of vertices connected by edges. The length of the path is the number of edges in the path.
Cycle = a path of length ≥ 1 that begins and ends at the same vertex. A simple cycle is a cycle that does not repeat itself. Note that a cycle can infinitely repeat itself; it's like an infinite loop. The simple cycle is like the body of an infinite loop.
Subgraph = a subgraph of a graph G = (V, E) is a graph H = (W, F), where W ⊆ V and F ⊆ E. A subgraph H of G is a proper subgraph of G if H ≠ G.

Finite-State Automaton (Automata)

M = (S, I, f, s_0, F):
a finite set S of states
a finite input alphabet I
a transition function f that assigns a new state to each state and input pair
an initial state s_0
a set of final states F that is a subset of S

propositional logic

Propositional logic (also called propositional calculus) is the area of logic that deals with propositions. Proposition = A proposition is a declarative sentence (that is, a sentence that declares a fact) that is either true or false, but not both. (A propositional variable is a letter that represents a proposition (traditionally, we start with p, q, r, s, ...). ie Let p = "The earth is round.") Truth Value = The truth value of a proposition is true, denoted by T, if it is a true proposition, and the truth value of a proposition is false, denoted by F, if it is a false proposition. Compound Propositions = New propositions formed from existing propositions using logical operators are called compound propositions.

Relations properties

Relations on a set can have important properties that we can use for classification. We will see six properties:
Reflexive = a relation R on a set A is called reflexive if (a, a) ∈ R for every element a ∈ A (so it has all ones down the diagonal of the matrix).
Irreflexive = a relation R on a set A is called irreflexive if (a, a) ∉ R for every element a ∈ A, i.e. ∀a ∈ A [(a, a) ∉ R]. Irreflexive is the opposite of reflexive (zeros down the diagonal).
Symmetric = a relation R on a set A is called symmetric if (b, a) ∈ R whenever (a, b) ∈ R, for all a, b ∈ A (it must mirror across the diagonal).
Asymmetric = if (a, b) ∈ R then (b, a) ∉ R; it does not mirror across the diagonal and is irreflexive.
Antisymmetric = if (a, b) ∈ R and (b, a) ∈ R then a = b; like asymmetric, but it does not have to be irreflexive.
Transitive = whenever (a, b) ∈ R and (b, c) ∈ R, then (a, c) ∈ R, for all a, b, c ∈ A (so basically if something "points" to something, it also points to everything that thing points to).
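A small Python check of three of these properties on a relation given as a set of pairs (the example set A and relation R are made up):

def is_reflexive(R, A):
    return all((a, a) in R for a in A)

def is_symmetric(R):
    return all((b, a) in R for (a, b) in R)

def is_transitive(R):
    return all((a, d) in R for (a, b) in R for (c, d) in R if b == c)

A = {1, 2, 3}
R = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 1)}
print(is_reflexive(R, A), is_symmetric(R), is_transitive(R))   # True True True -> an equivalence relation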

Tree Traversal

Rooted tree = a tree in which one vertex has been designated as the root and every edge is directed away from the root.
Ordered rooted tree = a rooted tree where the children of each vertex are ordered.
Binary tree = an ordered rooted tree where each vertex has at most 2 children, denoted left and right. The left child is the root of the left subtree, and the right child is the root of the right subtree.
Print orders during depth-first traversal:
Preorder = lists a node before visiting any of its children, so each node is listed as it is visited (put the print statement before recursing).
Postorder = lists the nodes of the tree in the order that we complete the visit (put the print statement after recursing).
Inorder = prints the left subtree, then the node, then the right subtree (put the print statement after recursing left but before recursing right); for a binary search tree this prints the nodes in sorted order.
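A small Python sketch of the three print orders on a hand-built binary tree (the node class and example tree are assumptions):

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(n):
    if n:
        print(n.value, end=' '); preorder(n.left); preorder(n.right)    # print before recursing

def inorder(n):
    if n:
        inorder(n.left); print(n.value, end=' '); inorder(n.right)      # print between left and right

def postorder(n):
    if n:
        postorder(n.left); postorder(n.right); print(n.value, end=' ')  # print after recursing

root = Node(2, Node(1), Node(3))   # 2 at the root, 1 on the left, 3 on the right
preorder(root);  print()   # 2 1 3
inorder(root);   print()   # 1 2 3  (sorted, since this tree is a BST)
postorder(root); print()   # 1 3 2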

existential quantification

The existential quantification of P(x) is the proposition "There exists an element x in the domain such that P(x) [is true]." We use the notation ∃x P(x) for the existential quantification of P(x); here ∃ is called the existential quantifier. (Just an or-ed list (or disjunction) of P(x) values.) ∃x P(x) := P(x1) ∨ P(x2) ∨ P(x3) ∨ . . . ∨ P(xn)

universal quantification

The universal quantification of P(x) is the statement "P(x) [is true] for all values x in the domain." The notation ∀x P(x) denotes the universal quantification of P(x). Here ∀ is called the universal quantifier. We read ∀x P(x) as "for all x, P of x." An element [value of x] for which P(x) is false is called a counterexample of ∀x P(x). (Basically an and-ed list (or conjunction) of P(x) values.) ∀x P(x) := P(x1) ∧ P(x2) ∧ P(x3) ∧ . . . ∧ P(xn)

Relational Data Model, Unary Operators, and Binary Operators

Used for building databases. Has a schema at the top of the table, and the rows below show instances of the relationship (associated with "Rules" in our Datalog programs).
Unary operators:
Select: represented with σ; selects rows.
Project: represented with π; selects columns.
Rename: represented with ρ; ρ A←Z renames schema A with Z.
Binary operators:
Union: represented by ∪; the union of two relational tables is the same as the union of two relations.
Intersection: represented by ∩; the intersection of two relational tables is the same as the intersection of two relations.
Cross product: represented with ×; like the Cartesian product, it creates all possible combinations, assuming the schemas are different. If they are the same, perform a natural join instead.
Natural join: uses the symbol ⋈ (written |×|). If the schemas are the same it acts like an intersection; if they are different it acts like a cross product; if only some attributes are the same, it is like an intersection but adds the extra schema columns.
Order of operations: unary before binary, from the inside out.

Bit string representation in set theory.

We have a universe U which defines the order of all elements in the set. The number of elements in the universe is our number of bits. We compare other sets against this and put a 1 where it contains the element, and a zero where it doesn't
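A short Python illustration (the universe U and the sets are arbitrary examples):

U = ['a', 'b', 'c', 'd', 'e']            # fixes the order of all elements
A = {'a', 'c', 'd'}
B = {'c', 'e'}

def bits(S):
    return ''.join('1' if x in S else '0' for x in U)

print(bits(A))       # 10110
print(bits(B))       # 00101
print(bits(A | B))   # 10111 - set operations become bitwise operations on the strings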

Equivalence Classes

We use equivalence relations to build equivalence classes. The equivalence classes partition the domain, so they are sometimes referred to as the partitions of an equivalence relation. Let R be an equivalence relation on a set A. The set of all elements that are related to an element a of A is called the equivalence class of a, denoted [a]R. When there is only one relation in question we can drop the subscript and just write [a]. (So it's basically the set of every vertex the vertex points to.)

LL(k) grammars

A subset of grammars that does not require backtracking when creating a parse tree and can be successfully parsed by an algorithm (there is never a case where two productions for the same non-terminal begin with the same terminal). Can be parsed with an LL parser, which parses the input from left to right and produces a leftmost derivation. An LL(k) parser is able to parse an LL grammar with just k look-ahead characters; k is determined by the number of terminals we have to look at before choosing which production to use. (This means we can look at the leftmost terminal and build the table from it.)

Function

A transformation or mapping of a set of inputs to a set of outputs. Domain = the set of inputs; codomain = the set of outputs. If f(a) = b, we say that b is the image of a and a is the preimage of b. The range, or image, of f is the set of all images of elements of A. Also, if f is a function from A to B, we say that f maps A to B. Domain of definition (preimages) = elements of the domain that are mapped to elements of the codomain. Partial function = when the domain of definition is a subset of the domain. Total function = when the domain of definition is equal to the domain.

Closures

Adding the tuples that make a relation reflexive, symmetric, or transitive.
Minimal relation = there is no tuple in the new relation that we can remove such that the relation still contains all original tuples and has the desired property.
Computing the reflexive closure: Δ (capital delta) is defined as Δ = {(a, a) | a ∈ A}. To compute the reflexive closure of a relation R, compute R ∪ Δ.
Computing the symmetric closure: the inverse, or transpose, relation of a relation R, written R⁻¹, is defined as R⁻¹ = {(b, a) | (a, b) ∈ R}. To compute the symmetric closure of a relation R, compute R ∪ R⁻¹.
Transitive closure algorithm: 1. Add the ordered pairs needed for the relation to be transitive. 2. Repeat until no new ordered pairs are added.
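A minimal sketch of the reflexive and symmetric closures in Python (the set A and relation R are made up):

def reflexive_closure(R, A):
    return R | {(a, a) for a in A}          # R ∪ Δ

def symmetric_closure(R):
    return R | {(b, a) for (a, b) in R}     # R ∪ R⁻¹

A = {1, 2, 3}
R = {(1, 2), (2, 3)}
print(reflexive_closure(R, A))   # adds (1, 1), (2, 2), (3, 3)
print(symmetric_closure(R))      # adds (2, 1), (3, 2)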

Breadth First Search

Algorithm: start at the root, then visit all of the root's connections, then the root's connections' connections. Each level gets visited before moving on to the next; thus we are exploring the breadth of the tree before going any deeper.
Shortest path: BFS produces the shortest path (fewest edges) from the starting vertex to every other vertex in the resulting tree.
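A minimal Python sketch using a queue (the adjacency-list graph is an assumed representation):

from collections import deque

def bfs(graph, root):
    visited = {root}
    order = []
    queue = deque([root])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph[v]:          # all of v's connections
            if w not in visited:
                visited.add(w)
                queue.append(w)     # queued behind the rest of the current level
    return order

g = {'a': ['b', 'c'], 'b': ['a', 'd'], 'c': ['a', 'd'], 'd': ['b', 'c']}
print(bfs(g, 'a'))   # ['a', 'b', 'c', 'd'] - each level before the next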

greedy algorithm

An algorithm that makes the locally optimal choice at each step, in the hope that this produces a globally optimal solution.

Hasse Diagrams

Can be constructed for any partial order (for a total order the elements form a single column). A relational graph drawn without arrows: elements are arranged in levels with lines linking related elements. We know that each element points to itself and to whatever it links to (usually each thing above it), so those edges are left out.

datalog

A small language used for this class; a mini version of Prolog. A Datalog program has four sections: Schemes (declare the relations), Facts (tuples that belong to the relations), Rules (derive new facts from existing ones), and Queries (check to see if something holds).

regular expressions

The first generator used to define a language. Regular expressions over a set I are defined recursively as follows: ∅ is a regex (the empty set); λ is a regex (the empty string); x is a regex when x ∈ I; (AB), (A ∪ B), and A* are regexes when A and B are regexes.

Logic Operators

Five core operators in propositional logic:
negation: ¬ (not; "it is not the case that ...")
conjunction: ∧ (and)
disjunction: ∨ (or)
implication: → (only T → F is false, everything else is true; an "or" with a bubble on one input; "if this then that")
bi-implication: ↔ (iff statement; an XNOR gate)
Precedence (highest to lowest): 1. ¬  2. ∧  3. ∨  4. →  5. ↔

formal vs natural languages

formal languages: precise rules and mathematical definitions. created with a purpose and formula. natural language: something people speak

Logical Equivalence

Propositions p and q are logically equivalent if p ↔ q is a tautology. The notation p ≡ q denotes that p and q are logically equivalent.

Function Properties

one to one/injection = f(a) = f(b) implies that a = b for all a and b in the domain of f onto/surjection = for every element b ∈ B there is an element a ∈ A with f(a) = b. one-to-one correspondence/Bijection = a function that has both injective and surjective properties

important logical equivalencies

p → q ≡ ¬p ∨ q

Posets and total orders

Partially ordered set: a set S together with a partial ordering R is called a partially ordered set, or poset (pronounced "poh-set"), and is denoted by (S, R). Members of S are called elements of the poset.
Minimal element = an element a such that there is no b with b ≼ a, unless a = b (bottom elements that nothing points to in a Hasse diagram).
Maximal element = an element a such that there is no b with a ≼ b, unless a = b (top elements that don't point to anything in a Hasse diagram).

recognizers and generator

recognizer = construct that accepts a language. generator = construct that creates strings of a specific language.

Finite state machines

Recognizers for languages generated by regular expressions. M = (S, I, O, f, g, s_0):
a finite set S of states
a finite input alphabet I
a finite output alphabet O
a transition function f that assigns a new state to each state and input pair
an output function g that assigns an output to each state and input pair
an initial state s_0

Kruskal's Algorithm

Same as Prim's algorithm except the subsequent edges don't have to be adjacent to those already chosen: keep choosing the lowest-cost unselected edge that doesn't complete a cycle until we have n - 1 edges.
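A sketch in Python, using union-find to detect when an edge would complete a cycle (the edge list and vertex numbering are assumptions for illustration):

def kruskal(n, edges):
    # edges: list of (weight, u, v) with vertices numbered 0..n-1
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):           # lowest-cost edges first
        ru, rv = find(u), find(v)
        if ru != rv:                        # different components, so no cycle
            parent[ru] = rv
            tree.append((w, u, v))
        if len(tree) == n - 1:
            break
    return tree

edges = [(1, 0, 1), (2, 1, 2), (4, 0, 2), (5, 1, 3), (1, 2, 3)]
print(kruskal(4, edges))   # [(1, 0, 1), (1, 2, 3), (2, 1, 2)]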

Floyd's Algorithm

Same basic idea and formula as Warshall's algorithm, except we focus on the weights of the edges rather than just whether vertices are connected. The idea is that we can take a problem, like airlines having flights to and from multiple cities, and turn it into a graph where the weights represent time or distance. Floyd's algorithm calculates the shortest cost between every pair of points. Draw the standard relation matrix, but write the weight where tuples exist instead of a 1, write a zero where a vertex would point to itself, and write infinity (instead of a zero) where vertices don't connect. Then, as in Warshall's algorithm, consider each "mid" vertex in turn and replace an entry whenever going through that vertex gives a smaller total weight (in particular, replacing any infinity that can now be reached).
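A sketch of the update step in Python on a small weight matrix (the matrix is a made-up example; 0 on the diagonal, infinity where vertices don't connect):

INF = float('inf')

def floyd(W):
    n = len(W)
    D = [row[:] for row in W]
    for mid in range(n):
        for frm in range(n):
            for to in range(n):
                # keep the cheaper of the current path or the path through mid
                if D[frm][mid] + D[mid][to] < D[frm][to]:
                    D[frm][to] = D[frm][mid] + D[mid][to]
    return D

W = [[0,   3,   INF],
     [3,   0,   2  ],
     [INF, 2,   0  ]]
print(floyd(W))   # [[0, 3, 5], [3, 0, 2], [5, 2, 0]]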

three key schemas

Schema = a representation of information and how it is used. 1. recognizers and generators (state machines) 2. logic and relations (Boolean algebra) 3. graphs and algorithms (graphs look like flowcharts)

Set Operations and tuples

Six important set operations:
Union: A ∪ B = {x | x ∈ A ∨ x ∈ B}. Thus, A ∪ B is a new set that contains all distinct elements contained in A or B (we don't include duplicates).
Intersection: A ∩ B = {x | x ∈ A ∧ x ∈ B}. Thus, A ∩ B is a new set that contains all distinct elements contained in both A and B.
Difference: A − B = {x | x ∈ A ∧ x ∉ B}. Thus, the new set contains elements that are only contained in A and not in B.
Complement: we first define the set U, called the universal set: it contains all possible elements of the domain. The complement of a set S is denoted S̄ and is defined as the difference of U and S: S̄ = U − S.
Tuple: the ordered n-tuple (a1, a2, . . . , an) is the ordered collection that has a1 as its first element, a2 as its second element, . . ., and an as its nth element.
Cartesian product: the Cartesian product of A and B, denoted A × B, is the set of all ordered pairs [2-tuples] (a, b), where a ∈ A and b ∈ B. Hence, A × B = {(a, b) | a ∈ A ∧ b ∈ B}.

Predicates

sometimes called a propositional function. A predicate is a parameterized proposition, denoted as P(x), where x is constrained to some range of values called the domain. the Predicate maps each value of the domain to a truth value.

language definitions

symbol = an element of a set string = a sequence of symbols (using any combination or quantity of symbols) language = a set of strings (all possible strings, or words, that are part of the given set)(subset of an alphabet*) word = a string of a language vocabulary = a finite, non-empty set of symbols 𝜆 = an empty string ∅ = the empty set.

Tautology, contradiction, and contingency

tautology = needless repetition of an idea by using different but equivalent words; a redundancy but for class: A compound proposition that is always true, no matter what the truth values of the propositional variables that occur in it, is called a tautology. Contradiction = A compound proposition that is always false. contingency = A compound proposition that is neither a tautology nor a contradiction

Moore machines

the output is determined only by the state

