500 summer


Sudan Function

F0(x,y) = x + y; Fn+1(x,0) = x, n ≥ 0; Fn+1(x,y+1) = Fn(Fn+1(x,y), Fn+1(x,y) + y + 1), n ≥ 0. A total computable (recursive) function that is not primitive recursive.
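A minimal Python sketch (not from the book) of the recurrences above; Sudan values explode quickly, so only tiny arguments are feasible:

from functools import lru_cache

@lru_cache(maxsize=None)
def F(n, x, y):
    if n == 0:
        return x + y                     # F0(x, y) = x + y
    if y == 0:
        return x                         # Fn+1(x, 0) = x
    prev = F(n, x, y - 1)                # Fn+1(x, y-1)
    return F(n - 1, prev, prev + y)      # Fn(Fn+1(x, y-1), Fn+1(x, y-1) + (y-1) + 1)

print(F(0, 3, 4), F(1, 1, 2), F(2, 1, 1))   # 7 8 8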

Savitch's Theorem

For any function f : N−→R+, where f(n) ≥ n, NSPACE(f(n)) ⊆ SPACE(f^2(n)).

Time Hierarchy Theorem

For any time constructible function t: N−→N, a language A exists that is decidable in O(t(n)) time but not decidable in time o(t(n)/log t(n)).

We call a nondeterministic Turing machine a ______ if all branches halt on all inputs

decider

Surjective

describes a mapping in which each element of the range is the target of some element of the domain. Also, onto.

Bijective

describes a relation that is both injective and surjective (one-to-one and onto).

Rice's theorem

determining any nontrivial property of the language recognized by a TM is undecidable

An endmarked language is generated by a deterministic context-free grammar if and only if it is

deterministic context free.

The languages that are recognizable by deterministic pushdown automata (DPDAs) are called

deterministic context-free languages (DCFLs)

P <-> Q

(P -> Q) AND (Q -> P)

tm recognizable

Call a language Turing-recognizable if some Turing machine recognizes it.

Chapter 10 problems are 8, 13, 14, 23 (have none)

Chapter 9 problems are 1, 2, 3, 6, 7, 8, 9, 10, 11, 12, 15, 16, 20, 22 (have 1, 2, 3, 7, 15)

PSPACE=NPSPACE

Due to Savitch's theorem: because the square of any polynomial is still a polynomial, NPSPACE ⊆ PSPACE, so PSPACE = NPSPACE. For example, SAT is in SPACE(n) and ALLNFA is in coNSPACE(n) and hence, by Savitch's theorem, in SPACE(n^2), because the deterministic space complexity classes are closed under complement. Therefore, both languages are in PSPACE.

cfl pumping

If A is a context-free language, then there is a number p (the pumping length) where, if s is any string in A of length at least p, then s may be divided into five pieces s = uvxyz satisfying the conditions 1. for each i ≥ 0, u v^i x y^i z ∈ A, 2. |vy| > 0, and 3. |vxy| ≤ p.

Myhill-Nerode

Let L be a language and let X be a set of strings. Say that X is pairwise distinguishable by L if every two distinct strings in X are distinguishable by L. Define the index of L to be the maximum number of elements in any set that is pairwise distinguishable by L. The index of L may be finite or infinite. The Myhill-Nerode theorem: L is regular if and only if its index is finite, and the index equals the number of states in the smallest DFA recognizing L.

Adfa in L

M= "on imput <B,w> where B is DFA and w is string 1. Sim B on w by keeping track of B's current state and its current head location, updating appropriately 2. If sim ends in accept, accept 3. Else if the sim ends in non-accepting state, reject SPACE required to carry out sim is O(logn), since n items of values storing input M. This we constructed TM M to decided Adfa in logspace-- Adfa in L

NP Time class

NP = ∪k NTIME(n^k).

Cook-Levin Theorem

SAT is NP-c

The complement of ATM is not Turing-recognizable.

We know that ATM is Turing-recognizable. If the complement of ATM also were Turing-recognizable, ATM would be decidable. Theorem 4.11 tells us that ATM is not decidable, so the complement of ATM must not be Turing-recognizable.

pcp proof

We let TM R decide the PCP and construct S deciding ATM. Let M = (Q, Σ, Γ, δ, q0, qaccept, qreject), where Q, Σ, Γ, and δ are the state set, input alphabet, tape alphabet, and transition function of M, respectively. In this case, S constructs an instance of the PCP P that has a match iff M accepts w. To do that, S first constructs an instance P′ of the MPCP. We describe the construction in seven parts, each of which accomplishes a particular aspect of simulating M on w. To explain what we are doing, we interleave the construction with an example of the construction in action. We write each domino with top string t and bottom string b as [t / b].

Part 1. The construction begins in the following manner. Put [# / #q0w1w2···wn#] into P′ as the first domino [t1 / b1]. Because P′ is an instance of the MPCP, the match must begin with this domino. Thus, the bottom string begins correctly with C1 = q0w1w2···wn, the first configuration in the accepting computation history for M on w. (Figure 5.16: beginning of the MPCP match.) In this depiction of the partial match achieved so far, the bottom string consists of #q0w1w2···wn# and the top string consists only of #. To get a match, we need to extend the top string to match the bottom string. We provide additional dominos to allow this extension. The additional dominos cause M's next configuration to appear at the extension of the bottom string by forcing a single-step simulation of M.

In parts 2, 3, and 4, we add to P′ dominos that perform the main part of the simulation. Part 2 handles head motions to the right, part 3 handles head motions to the left, and part 4 handles the tape cells not adjacent to the head.

Part 2. For every a, b ∈ Γ and every q, r ∈ Q where q ≠ qreject, if δ(q, a) = (r, b, R), put [qa / br] into P′.

Part 3. For every a, b, c ∈ Γ and every q, r ∈ Q where q ≠ qreject, if δ(q, a) = (r, b, L), put [cqa / rcb] into P′.

Part 4. For every a ∈ Γ, put [a / a] into P′.

Now we make up a hypothetical example to illustrate what we have built so far. Let Γ = {0, 1, 2, ␣}. Say that w is the string 0100 and that the start state of M is q0. In state q0, upon reading a 0, let's say that the transition function dictates that M enters state q7, writes a 2 on the tape, and moves its head to the right. In other words, δ(q0, 0) = (q7, 2, R). Part 1 places the domino [# / #q00100#] = [t1 / b1] in P′, and the match begins. In addition, part 2 places the domino [q00 / 2q7] as δ(q0, 0) = (q7, 2, R), and part 4 places the dominos [0 / 0], [1 / 1], [2 / 2], and [␣ / ␣] in P′, as 0, 1, 2, and ␣ are the members of Γ. Together with part 5, that allows us to extend the match. Thus, the dominos of parts 2, 3, and 4 let us extend the match by adding the second configuration after the first one. We want this process to continue, adding the third configuration, then the fourth, and so on. For it to happen, we need to add one more domino for copying the # symbol.

Part 5. Put [# / #] and [# / ␣#] into P′. The first of these dominos allows us to copy the # symbol that marks the separation of the configurations. In addition to that, the second domino allows us to add a blank symbol ␣ at the end of the configuration to simulate the infinitely many blanks to the right that are suppressed when we write the configuration.

Continuing with the example, let's say that in state q7, upon reading a 1, M goes to state q5, writes a 0, and moves the head to the right. That is, δ(q7, 1) = (q5, 0, R). Then we have the domino [q71 / 0q5] in P′, and the latest partial match extends accordingly. Then, suppose that in state q5, upon reading a 0, M goes to state q9, writes a 2, and moves its head to the left. So δ(q5, 0) = (q9, 2, L). Then we have the dominos [0q50 / q902], [1q50 / q912], [2q50 / q922], and [␣q50 / q9␣2]. The first one is relevant because the symbol to the left of the head is a 0. The preceding partial match extends again. Note that as we construct a match, we are forced to simulate M on input w. This process continues until M reaches a halting state. If the accept state occurs, we want to let the top of the partial match "catch up" with the bottom so that the match is complete. We can arrange for that to happen by adding additional dominos.

Part 6. For every a ∈ Γ, put [a qaccept / qaccept] and [qaccept a / qaccept] into P′. This step has the effect of adding "pseudo-steps" of the Turing machine after it has halted, where the head "eats" adjacent symbols until none are left. The dominos we have just added allow the match to continue after the machine halts in the accept state.

Part 7. Finally, we add the domino [qaccept## / #] and complete the match.

That concludes the construction of P′. Recall that P′ is an instance of the MPCP whereby the match simulates the computation of M on w. To finish the proof, we recall that the MPCP differs from the PCP in that the match is required to start with the first domino in the list. If we view P′ as an instance of the PCP instead of the MPCP, it obviously has a match, regardless of whether M accepts w. Can you find it? (Hint: It is very short.)

We now show how to convert P′ to P, an instance of the PCP that still simulates M on w. We do so with a somewhat technical trick. The idea is to take the requirement that the match starts with the first domino and build it directly into the problem instance itself so that it becomes enforced automatically. After that, the requirement isn't needed. We introduce some notation to implement this idea. Let u = u1u2···un be any string of length n. Define ⋆u, u⋆, and ⋆u⋆ to be the three strings
⋆u = ∗u1∗u2∗u3∗···∗un
u⋆ = u1∗u2∗u3∗···∗un∗
⋆u⋆ = ∗u1∗u2∗u3∗···∗un∗.
Here, ⋆u adds the symbol ∗ before every character in u, u⋆ adds one after each character in u, and ⋆u⋆ adds one both before and after each character in u.

To convert P′ to P, an instance of the PCP, we do the following. If P′ were the collection
{[t1 / b1], [t2 / b2], [t3 / b3], ..., [tk / bk]},
we let P be the collection
{[⋆t1 / ⋆b1⋆], [⋆t1 / b1⋆], [⋆t2 / b2⋆], [⋆t3 / b3⋆], ..., [⋆tk / bk⋆], [∗✸ / ✸]}.
Considering P as an instance of the PCP, we see that the only domino that could possibly start a match is the first one, [⋆t1 / ⋆b1⋆], because it is the only one where both the top and the bottom start with the same symbol—namely, ∗. Besides forcing the match to start with the first domino, the presence of the ∗s doesn't affect possible matches because they simply interleave with the original symbols. The original symbols now occur in the even positions of the match. The domino [∗✸ / ✸] is there to allow the top to add the extra ∗ at the end of the match.
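A small Python checker (an illustration, not part of the proof): given a list of dominoes and a sequence of indices, it tests whether the concatenated tops equal the concatenated bottoms, i.e., whether the sequence is a match. The collection below is along the lines of the book's introductory PCP example:

def is_match(dominoes, sequence):
    top = "".join(dominoes[i][0] for i in sequence)
    bottom = "".join(dominoes[i][1] for i in sequence)
    return top == bottom

P = [("b", "ca"), ("a", "ab"), ("ca", "a"), ("abc", "c")]
print(is_match(P, [1, 0, 2, 1, 3]))   # True: abcaaabc on both top and bottom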

dpda

a 6-tuple (Q, Σ, Γ, δ, q0, F), where Q, Σ, Γ, and F are all finite sets, and 1. Q is the set of states, 2. Σ is the input alphabet, 3. Γ is the stack alphabet, 4. δ : Q × Σε × Γε−→(Q × Γε) ∪ {∅} is the transition function, 5. q0 ∈ Q is the start state, and 6. F ⊆ Q is the set of accept states. The transition function δ must satisfy the following condition. For every q ∈ Q, a ∈ Σ, and x ∈ Γ, exactly one of the values δ(q, a, x), δ(q, a, ε), δ(q, ε, x), and δ(q, ε, ε) is not ∅.

A deterministic context-free grammar is

a context-free grammar such that every valid string has a forced handle.

Any context-free language is generated by

a context-free grammar in Chomsky normal form.

small o

a function is asymptotically less than another. Let f and g be functions f, g: N−→R+. Say that f(n) = o(g(n)) if lim_{n→∞} f(n)/g(n) = 0. In other words, f(n) = o(g(n)) means that for any real number c > 0, a number n0 exists, where f(n) < c·g(n) for all n ≥ n0.

Injective

a function that is one-to-one

recursively enumerable language

a language whose sentences can be enumerated by a recursive program, i.e., any language described by a formal grammar. Abbreviated RE.

linear bounded automaton

a restricted type of Turing machine wherein the tape head isn't permitted to move off the portion of the tape containing the input. If the machine tries to move its head off either end of the input, the head stays where it is—in the same way that the head will not move off the left-hand end of an ordinary Turing machine's tape.

uncountable

a set that is not countable, i.e., it has no correspondence with N; diagonalization shows that the set of real numbers R is uncountable.

well-formed formula

a string of the form R(x1, ..., xk) is an atomic formula, where the relation symbol R has arity k, matching the number of arguments. Well-formed formulas are built up from atomic formulas using Boolean operations and quantifiers (see formula).

reduction

a way of converting one problem to another problem in such a way that a solution to the second problem can be used to solve the first problem

formula

a well-formed string over the alphabet. It can 1. be atomic; 2. have the form φ1 AND φ2, φ1 OR φ2, or NOT φ1, where φ1 and φ2 are smaller formulas; or 3. have the form ∃xi [φ1] or ∀xi [φ1], where φ1 is a smaller formula.

Σi-alternating Turing machine (i is in N)

an alternating Turing machine that on every input and on every computation branch contains at most i runs of universal or existential steps, starting with existential steps

Πi-alternating Turing machine ( i in N)

an alternating Turing machine that on every input and on every computation branch contains at most i runs of universal or existential steps, starting with universal steps

closed

a collection is closed under an operation if performing that operation on members of the collection returns an object that is still in the collection

universal Turing machine

capable of simulating any other Turing machine from the description of that machine.

equivalence relation

captures the notion of two objects being equal in some feature. A binary relation R is an equivalence relation if R satisfies three conditions: 1. R is reflexive if for every x, xRx; 2. R is symmetric if for every x and y, xRy implies yRx; and 3. R is transitive if for every x, y, and z, xRy and yRz implies xRz.

CLIQUE ={<G,k>|G is an undirected graph with a k-clique} is in NP

Certificate: the clique. The following is a verifier V for CLIQUE. V = "On input ⟨⟨G,k⟩,c⟩: 1. Test whether c is a subgraph with k nodes in G. 2. Test whether G contains all edges connecting nodes in c. 3. If both pass, accept; otherwise, reject." If you prefer to think of NP in terms of nondeterministic polynomial time Turing machines, you may prove this theorem by giving one that decides CLIQUE. Observe the similarity between the two proofs. N = "On input ⟨G,k⟩, where G is a graph: 1. Nondeterministically select a subset c of k nodes of G. 2. Test whether G contains all edges connecting nodes in c. 3. If yes, accept; otherwise, reject."
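A Python sketch of verifier V, with the graph encoded as a node list and an edge set (that encoding is an assumption for the demo):

from itertools import combinations

def verify_clique(nodes, edges, k, c):
    if len(c) != k or not set(c) <= set(nodes):
        return False                          # c must be k nodes of G
    return all((u, v) in edges or (v, u) in edges
               for u, v in combinations(c, 2))   # every pair is connected

G_nodes = [1, 2, 3, 4]
G_edges = {(1, 2), (1, 3), (2, 3), (3, 4)}
print(verify_clique(G_nodes, G_edges, 3, [1, 2, 3]))   # True: a 3-clique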

type 2

context-free languages, recognized by PDAs/NPDAs. Rules have the form A → β, where A ∈ V and β ∈ (Σ ∪ V)*. Derivation step: αAγ ⇒G αβγ iff A → β ∈ R. L(G) = {w ∈ Σ* | S ⇒*G w}.

Type 1

context-sensitive languages, recognized by LBAs. Rules have the form α → β, where α, β ∈ (Σ ∪ V)* and |α| ≤ |β|. Derivation step: γ1αγ2 ⇒G γ1βγ2 iff α → β ∈ R. L(G) = {w ∈ Σ* | S ⇒*G w}.

Pushdown automata are equivalent in power to

context-free grammars

T^ATM is decidable. Therefore ETM is...

decidable relative to ATM

If f(n) ≥ n, a TM that uses f(n) space can have at most

f(n)·2^O(f(n)) different configurations (Lemma 5.8, pg 222?). A TM computation that halts may not repeat a configuration. Therefore, a TM that uses space f(n) must run in time f(n)·2^O(f(n)), so PSPACE ⊆ EXPTIME = ∪k TIME(2^(n^k)).

countable

if the set is finite or has the same size as N

distinguishable

if some string z exists whereby exactly one of the strings xz and yz is a member of L

A language is decidable iff it is Turing-recognizable and co-Turing-recognizable.

in other words, a language is decidable exactly when both it and its complement are Turing-recognizable.

an oracle for a language B

is an external device that is capable of reporting whether any string w is a member of B

union reg

make two machines and run them in parallel (the product construction); the resulting tuple is (Q1 × Q2, Σ (or Σ1 ∪ Σ2 if the alphabets differ), δ((r1,r2),a) = (δ1(r1,a), δ2(r2,a)) for each (r1,r2) ∈ Q1 × Q2 and each a ∈ Σ, start state (q1,q2), and accept states (F1 × Q2) ∪ (Q1 × F2))
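A Python sketch of this product construction, assuming a shared alphabet and DFAs given as hypothetical (Q, start, F, delta) tuples:

def union_dfa(d1, d2, alphabet):
    Q1, s1, F1, t1 = d1
    Q2, s2, F2, t2 = d2
    Q = {(r1, r2) for r1 in Q1 for r2 in Q2}               # Q1 x Q2
    delta = {((r1, r2), a): (t1[(r1, a)], t2[(r2, a)])     # run both machines in parallel
             for (r1, r2) in Q for a in alphabet}
    F = {(r1, r2) for (r1, r2) in Q if r1 in F1 or r2 in F2}   # (F1 x Q2) U (Q1 x F2)
    return Q, (s1, s2), F, delta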

type 3

regular languages, recognized by NFAs/DFAs. Rules have the form A → aB or A → a, where A, B ∈ V and a ∈ Σ ∪ {ε}. Derivation step: αAγ ⇒G αβγ iff A → β ∈ R. L(G) = {w ∈ Σ* | S ⇒*G w}.

A language is Turing-recognizable if and only if

some nondet tm recognizes it

A language is context free if and only if

some pushdown automaton recognizes it.

productions

substitution rules

pumping length

the number p such that all strings in the language of length at least p can be "pumped."

t(n) ≥ n, any machine that operates in time t(n) can use at most

t(n) space because a machine can explore at most one new cell at each step of its computation.

mapping reducibility informal

that a computable function exists that converts instances of problem A to instances of problem B. If we have such a conversion function, called a reduction, we can solve A with a solver for B. The reason is that any instance of A can be solved by first using the reduction to convert it to an instance of B and then applying the solver for B.

Existence of a total function that is not primitive recursive -- Sudan and Ackermann's function

A total function is defined for all possible values of its input; that is, it always terminates and returns a value. The Sudan and Ackermann functions are total and computable, yet not primitive recursive.

ΠiSPACE(f(n))

the class of languages that a Πi-alternating TM can decide in O(f(n)) space

ΠiTIME(f(n))

the class of languages that a Πi-alternating TM can decide in O(f(n)) time.

ΣiSPACE(f(n))

the class of languages that a Σi-alternating TM can decide in O(f(n)) space

If A ≤m B and B is decidable,

then A is decidable. We let M be the decider for B and f be the reduction from A to B. We describe a decider N for A as follows. N = "On input w: 1. Compute f(w). 2. Run M on input f(w) and output whatever M outputs." Clearly, if w ∈ A, then f(w) ∈ B because f is a reduction from A to B. Thus, M accepts f(w) whenever w ∈ A. Therefore, N works as desired.

If B is NP-complete and B ≤P C for C in NP,

then C is NP-complete. We already know that C is in NP, so we must show that every A in NP is polynomial time reducible to C. Because B is NP-complete, every language in NP is polynomial time reducible to B, and B in turn is polynomial time reducible to C. Polynomial time reductions compose; that is, if A is polynomial time reducible to B and B is polynomial time reducible to C, then A is polynomial time reducible to C. Hence every language in NP is polynomial time reducible to C.

If B is NP-complete and B ∈P,

then P = NP

pcp

undecidable; an instance is a collection of dominoes, each with a top string and a bottom string, and the problem asks whether some arrangement of the dominoes (with repetition) makes the concatenated top and bottom strings match

if A reduces to B

we can use the solution to B to solve A

edfa

{⟨A⟩ | A is a DFA and L(A) = ∅}. Decidable. T = "On input ⟨A⟩, where A is a DFA: 1. Mark the start state of A. 2. Repeat until no new states get marked: 3. Mark any state that has a transition coming into it from any state that is already marked. 4. If no accept state is marked, accept; otherwise, reject."
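A Python sketch of T's marking algorithm on a dictionary-encoded DFA (the encoding is an assumption); it returns True iff L(A) is empty:

def dfa_language_empty(states, start, accepting, delta):
    marked, frontier = {start}, [start]           # mark the start state
    while frontier:                               # repeat until nothing new gets marked
        q = frontier.pop()
        for (p, a), r in delta.items():
            if p == q and r not in marked:
                marked.add(r)
                frontier.append(r)
    return not (marked & set(accepting))          # empty iff no accept state marked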

ALLCFG

{⟨G⟩ | G is a CFG and L(G) = Σ∗}. Undecidable. This proof is by contradiction. To get the contradiction, we assume that ALLCFG is decidable and use this assumption to show that ATM is decidable. This proof is similar to that of Theorem 5.10 but with a small extra twist: It is a reduction from ATM via computation histories, but we modify the representation of the computation histories slightly for a technical reason that we will explain later.

We now describe how to use a decision procedure for ALLCFG to decide ATM. For a TM M and an input w, we construct a CFG G that generates all strings if and only if M does not accept w. So if M does accept w, G does not generate some particular string. This string is—guess what—the accepting computation history for M on w. That is, G is designed to generate all strings that are not accepting computation histories for M on w. To make the CFG G generate all strings that fail to be an accepting computation history for M on w, we utilize the following strategy. A string may fail to be an accepting computation history for several reasons. An accepting computation history for M on w appears as #C1#C2# ··· #Cl#, where Ci is the configuration of M on the ith step of the computation on w. Then, G generates all strings 1. that do not start with C1, 2. that do not end with an accepting configuration, or 3. in which some Ci does not properly yield Ci+1 under the rules of M. If M does not accept w, no accepting computation history exists, so all strings fail in one way or another. Therefore, G would generate all strings, as desired.

Now we get down to the actual construction of G. Instead of constructing G, we construct a PDA D. We know that we can use the construction given in Theorem 2.20 (page 117) to convert D to a CFG. We do so because, for our purposes, designing a PDA is easier than designing a CFG. In this instance, D will start by nondeterministically branching to guess which of the preceding three conditions to check. One branch checks on whether the beginning of the input string is C1 and accepts if it isn't. Another branch checks on whether the input string ends with a configuration containing the accept state, qaccept, and accepts if it isn't. The third branch is supposed to accept if some Ci does not properly yield Ci+1. It works by scanning the input until it nondeterministically decides that it has come to Ci. Next, it pushes Ci onto the stack until it comes to the end as marked by the # symbol. Then D pops the stack to compare with Ci+1. They are supposed to match except around the head position, where the difference is dictated by the transition function of M. Finally, D accepts if it discovers a mismatch or an improper update.

The problem with this idea is that when D pops Ci off the stack, it is in reverse order and not suitable for comparison with Ci+1. At this point, the twist in the proof appears: We write the accepting computation history differently. Every other configuration appears in reverse order. The odd positions remain written in the forward order, but the even positions are written backward.

ALBA

{⟨M,w⟩ | M is an LBA that accepts string w}. Decidable. L = "On input ⟨M,w⟩, where M is an LBA and w is a string: 1. Simulate M on w for q·n·g^n steps or until it halts, where q is the number of states of M, g is the number of symbols in its tape alphabet, and n is the length of w. 2. If M has halted, accept if it has accepted and reject if it has rejected. If it has not halted, reject." If M on w has not halted within q·n·g^n steps, it must be repeating a configuration according to Lemma 5.8 and therefore looping. That is why our algorithm rejects in this instance.

nondet tm transition function

δ : Q × Γ−→P(Q × Γ × {L, R})

Boolean

∧, ∨, and ¬

NP time class

NP ⊆ EXPTIME = ∪k TIME(2^(n^k))

SAT is NP-c proof

First, we show that SAT is in NP. A nondeterministic polynomial time machine can guess an assignment to a given formula φ and accept if the assignment satisfies φ. Next, we take any language A in NP and show that A is polynomial time reducible to SAT. Let N be a nondeterministic Turing machine that decides A in n^k time for some constant k. (For convenience, we actually assume that N runs in time n^k − 3; but only those readers interested in details should worry about this minor point.) The following notion helps to describe the reduction. A tableau for N on w is an n^k × n^k table whose rows are the configurations of a branch of the computation of N on input w, as shown in the following figure (construction in phone). For convenience later, we assume that each configuration starts and ends with a # symbol. Therefore, the first and last columns of a tableau are all #s. The first row of the tableau is the starting configuration of N on w, and each row follows the previous one according to N's transition function. A tableau is accepting if any row of the tableau is an accepting configuration. Every accepting tableau for N on w corresponds to an accepting computation branch of N on w. Thus, the problem of determining whether N accepts w is equivalent to the problem of determining whether an accepting tableau for N on w exists.

Now we get to the description of the polynomial time reduction f from A to SAT. On input w, the reduction produces a formula φ. We begin by describing the variables of φ. Say that Q and Γ are the state set and tape alphabet of N, respectively. Let C = Q ∪ Γ ∪ {#}. For each i and j between 1 and n^k and for each s in C, we have a variable, x_{i,j,s}. Each of the (n^k)^2 entries of a tableau is called a cell. The cell in row i and column j is called cell[i,j] and contains a symbol from C. We represent the contents of the cells with the variables of φ. If x_{i,j,s} takes on the value 1, it means that cell[i,j] contains an s.

Now we design φ so that a satisfying assignment to the variables does correspond to an accepting tableau for N on w. The formula φ is the AND of four parts: φcell ∧ φstart ∧ φmove ∧ φaccept. We describe each part in turn. As we mentioned previously, turning variable x_{i,j,s} on corresponds to placing symbol s in cell[i,j]. The first thing we must guarantee in order to obtain a correspondence between an assignment and a tableau is that the assignment turns on exactly one variable for each cell. Formula φcell ensures this requirement by expressing it in terms of Boolean operations:

φcell = ⋀_{1≤i,j≤n^k} [ (⋁_{s∈C} x_{i,j,s}) ∧ (⋀_{s,t∈C, s≠t} (¬x_{i,j,s} ∨ ¬x_{i,j,t})) ]

φcell is actually a large expression that contains a fragment for each cell in the tableau because i and j range from 1 to n^k. The first part of each fragment says that at least one variable is turned on in the corresponding cell. The second part of each fragment says that no more than one variable is turned on (literally, it says that in each pair of variables, at least one is turned off) in the corresponding cell. These fragments are connected by ∧ operations. The first part of φcell inside the brackets stipulates that at least one variable that is associated with each cell is on, whereas the second part stipulates that no more than one variable is on for each cell. Any assignment to the variables that satisfies φ (and therefore φcell) must have exactly one variable on for every cell. Thus, any satisfying assignment specifies one symbol in each cell of the table.

Parts φstart, φmove, and φaccept ensure that these symbols actually correspond to an accepting tableau as follows. Formula φstart ensures that the first row of the table is the starting configuration of N on w by explicitly stipulating that the corresponding variables are on:

φstart = x_{1,1,#} ∧ x_{1,2,q0} ∧ x_{1,3,w1} ∧ x_{1,4,w2} ∧ ... ∧ x_{1,n+2,wn} ∧ x_{1,n+3,␣} ∧ ... ∧ x_{1,n^k−1,␣} ∧ x_{1,n^k,#}.

Formula φaccept guarantees that an accepting configuration occurs in the tableau. It ensures that qaccept, the symbol for the accept state, appears in one of the cells of the tableau by stipulating that one of the corresponding variables is on:

φaccept = ⋁_{1≤i,j≤n^k} x_{i,j,qaccept}.

Finally, formula φmove guarantees that each row of the tableau corresponds to a configuration that legally follows the preceding row's configuration according to N's rules. It does so by ensuring that each 2 × 3 window of cells is legal. We say that a 2 × 3 window is legal if that window does not violate the actions specified by N's transition function. In other words, a window is legal if it might appear when one configuration correctly follows another. For example, say that a, b, and c are members of the tape alphabet, and q1 and q2 are states of N. Assume that when in state q1 with the head reading an a, N writes a b, stays in state q1, and moves right; and that when in state q1 with the head reading a b, N nondeterministically either 1. writes a c, enters q2, and moves to the left, or 2. writes an a, enters q2, and moves to the right. Expressed formally, δ(q1,a) = {(q1,b,R)} and δ(q1,b) = {(q2,c,L),(q2,a,R)}. Examples of legal windows for this machine are shown in Figure 7.39. If the top row of the tableau is the start configuration and every window in the tableau is legal, each row of the tableau is a configuration that legally follows the preceding one.

We estimate the size of each of the parts of φ. Formula φcell contains a fixed-size fragment of the formula for each cell of the tableau, so its size is O(n^{2k}). Formula φstart has a fragment for each cell in the top row, so its size is O(n^k). Formulas φmove and φaccept each contain a fixed-size fragment of the formula for each cell of the tableau, so their size is O(n^{2k}). Thus, φ's total size is O(n^{2k}). That bound is sufficient for our purposes because it shows that the size of φ is polynomial in n. If it were more than polynomial, the reduction wouldn't have any chance of generating it in polynomial time. (Actually, our estimates are low by a factor of O(log n) because each variable has indices that can range up to n^k and so may require O(log n) symbols to write into the formula, but this additional factor doesn't change the polynomiality of the result.)

To see that we can generate the formula in polynomial time, observe its highly repetitive nature. Each component of the formula is composed of many nearly identical fragments, which differ only at the indices in a simple way. Therefore, we may easily construct a reduction that produces φ in polynomial time from the input w.

P ∧(Q∨R)

(P ∧Q)∨(P ∧R)

P ∨(Q∧R)

(P ∨Q)∧(P ∨R).
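A quick Python check (illustration only) that both distributive laws above hold under all eight truth assignments:

from itertools import product

for P, Q, R in product([False, True], repeat=3):
    assert (P and (Q or R)) == ((P and Q) or (P and R))
    assert (P or (Q and R)) == ((P or Q) and (P or R))
print("both distributive laws hold")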

9.15 Define pad as in Problem 9.13. a. Prove that for every A and natural number k, A ∈ P iff pad(A, n^k) ∈ P. b. Prove that P ≠ SPACE(n).

(a) Let A be any language and k ∈ N. If A ∈ P, then pad(A, n^k) ∈ P because you can determine whether w ∈ pad(A, n^k) by writing w as s#^l where s doesn't contain the # symbol, then testing whether |w| = |s|^k, and finally testing whether s ∈ A. Implementing the first test in polynomial time is straightforward. The second test runs in time poly(|s|), and because |s| ≤ |w|, the test runs in time poly(|w|) and hence is in polynomial time. If pad(A, n^k) ∈ P, then A ∈ P because you can determine whether w ∈ A by padding w with # symbols until it has length |w|^k and then testing whether the result is in pad(A, n^k). Both of these actions require only polynomial time.

(b) Assume that P = SPACE(n). Let A be a language in SPACE(n^2) but not in SPACE(n), as shown to exist in the space hierarchy theorem. The language pad(A, n^2) ∈ SPACE(n) because you have enough space to run the O(n^2) space algorithm for A, using space that is linear in the padded language. Because of the assumption, pad(A, n^2) ∈ P, hence A ∈ P by part (a), and hence A ∈ SPACE(n), due to the assumption once again. But that is a contradiction.

9.7 Give regular expressions with exponentiation that generate the following languages over the alphabet {0,1}. a. All strings of length 500 b. All strings of length 500 or less c. All strings of length 500 or more d. All strings of length different than 500 e. All strings that contain exactly 500 1s f. All strings that contain at least 500 1s g. All strings that contain at most 500 1s h. All strings of length 500 or more that contain a 0 in the 500th position i. All strings that contain two 0s that have at least 500 symbols between them

(a) Σ^500; (b) (Σ ∪ ε)^500; (c) Σ^500 Σ*; (d) (Σ ∪ ε)^499 ∪ Σ^501 Σ*.

A language B is NP-complete if it satisfies two conditions

1. B is in NP, and 2. every A in NP is polynomial time reducible to B

regular expression

1. a for some a in the alphabet Σ, 2. ε, 3. ∅, 4. (R1 ∪ R2), where R1 and R2 are regular expressions, 5. (R1 ◦ R2), where R1 and R2 are regular expressions, or 6. (R1*), where R1 is a regular expression.

verifier for a language A is an algorithm V , where

A = {w | V accepts ⟨w,c⟩ for some string c}. We measure the time of a verifier only in terms of the length of w, so a polynomial time verifier runs in polynomial time in the length of w. A language A is polynomially verifiable if it has a polynomial time verifier.

SAT

A Boolean formula is satisfiable if some assignment of 0s and 1s to the variables makes the formula evaluate to 1. For example, φ = (¬x ∧ y) ∨ (x ∧ ¬z) is satisfiable because the assignment x = 0, y = 1, and z = 0 makes φ evaluate to 1. We say the assignment satisfies φ. The satisfiability problem is to test whether a Boolean formula is satisfiable.
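A brute-force Python satisfiability test (exponential time, for intuition only), using the example formula above:

from itertools import product

def satisfiable(phi, nvars):
    return any(phi(*bits) for bits in product([0, 1], repeat=nvars))

# phi = (NOT x AND y) OR (x AND NOT z); satisfied by x = 0, y = 1, z = 0.
phi = lambda x, y, z: (not x and y) or (x and not z)
print(satisfiable(phi, 3))   # True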

pushdown automaton

6-tuple (Q, Σ, Γ, δ, q0, F), where Q, Σ, Γ, and F are all finite sets, and 1. Q is the set of states, 2. Σ is the input alphabet, 3. Γ is the stack alphabet, 4. δ : Q × Σε × Γε−→P(Q × Γε) is the transition function, 5. q0 ∈ Q is the start state, and 6. F ⊆ Q is the set of accept states.

tm

7-tuple, (Q, Σ, Γ, δ, q0, qaccept, qreject), where Q, Σ, Γ are all finite sets and 1. Q is the set of states, 2. Σ is the input alphabet not containing the blank symbol ␣, 3. Γ is the tape alphabet, where ␣ ∈ Γ and Σ ⊆ Γ, 4. δ : Q × Γ−→Q × Γ × {L, R} is the transition function, 5. q0 ∈ Q is the start state, 6. qaccept ∈ Q is the accept state, and 7. qreject ∈ Q is the reject state, where qreject ̸= qaccept.

computable function

A function f : Σ∗→Σ∗ is a computable function if some Turing machine M, on every input w, halts with just f(w) on its tape.

polynomial time computable function

A function f : Σ∗→Σ∗ is a _______ if some polynomial time Turing machine M exists that halts with just f(w) on its tape, when started on any input w.

if A is reducible to B and B is decidable

A is also decidable

If A ≤T B and B is decidable, then

A is decidable

Language A is turing reducible to language B, written A≤T B if

A is decidable relative to B

A PSPACE-hard language is also NP-hard

A language B is PSPACE-hard if every A in PSPACE is polynomial time reducible to B. (If in addition B is in PSPACE, then B is PSPACE-complete.) An NP-hard language B is defined the same way with NP in place of PSPACE. The space taken to solve NP time problems is NPSPACE, so NP ⊆ NPSPACE; the space taken to solve P time problems is PSPACE, so P ⊆ PSPACE. By Savitch's theorem, PSPACE = NPSPACE. Since NP ⊆ PSPACE, every language in NP is also in PSPACE. So if every language A in PSPACE reduces to B, then in particular every language in NP reduces to B. Thus a PSPACE-hard language is NP-hard.

regular language

A language is regular if some finite automaton recognizes it; equivalently, a language is regular if and only if some nondeterministic finite automaton recognizes it.

ambiguous

A string w is derived ambiguously in context-free grammar G if it has two or more different leftmost derivations.

Chomsky normal form

A → BC A → a where a is any terminal and A, B, and C are any variables—except that B and C may not be the start variable. In addition, we permit the rule S → ε, where S is the start variable.

Time complexity class of A = {0k1k|k ≥ 0}

A ∈ TIME(n^2) because M1 decides A in time O(n^2), and TIME(n^2) contains all languages that can be decided in O(n^2) time.

Let P be any nontrivial property of the language of a Turing machine. Prove that the problem of determining whether a given Turing machine's language has property P is undecidable. In more formal terms, let P be a language consisting of Turing machine descrip- tions where P fulfills two conditions. First, P is nontrivial—it contains some, but not all, TM descriptions. Second, P is a property of the TM's language—whenever L(M1) = L(M2), we have ⟨M1⟩ ∈ P iff ⟨M2⟩ ∈ P. Here, M1 and M2 are any TMs. Prove that P is an undecidable language.

Assume for the sake of contradiction that P is a decidable language satisfying the properties and let RP be a TM that decides P. We show how to decide ATM using RP by constructing TM S. First, let T∅ be a TM that always rejects, so L(T∅) = ∅. You may assume that ⟨T∅⟩ ∉ P without loss of generality because you could proceed with the complement of P instead of P if ⟨T∅⟩ ∈ P. Because P is not trivial, there exists a TM T with ⟨T⟩ ∈ P. Design S to decide ATM using RP's ability to distinguish between T∅ and T. S = "On input ⟨M,w⟩: 1. Use M and w to construct the following TM Mw. Mw = "On input x: 1. Simulate M on w. If it halts and rejects, reject. If it accepts, proceed to stage 2. 2. Simulate T on x. If it accepts, accept." 2. Use TM RP to determine whether ⟨Mw⟩ ∈ P. If YES, accept. If NO, reject." TM Mw simulates T if M accepts w. Hence L(Mw) equals L(T) if M accepts w and ∅ otherwise. Therefore, ⟨Mw⟩ ∈ P iff M accepts w.

MINtm is not turing-recognizable

Assume that some TM E enumerates MINTM and obtain a contradiction. We construct the following TM C. C = "On input w: 1. Obtain, via the recursion theorem, its own description ⟨C⟩. 2. Run the enumerator E until a machine D appears with a longer description than that of C. 3. Simulate D on input w." Because MINTM is infinite, E's list must contain a TM with a longer description than C's description. Therefore, step 2 of C eventually terminates with some TM D that is longer than C. Then C simulates D and so is equivalent to it. Because C is shorter than D and is equivalent to it, D cannot be minimal. But D appears on the list that E produces. Thus, we have a contradiction.

Diagonalization

Assume that we have sets A and B and a function f from A to B. Say that f is one-to-one if it never maps two different elements to the same place—that is, if f(a) ̸= f(b) whenever a ̸= b. Say that f is onto if it hits every element of B—that is, if for every b ∈ B there is an a ∈ A such that f(a) = b. Say that A and B are the same size if there is a one-to-one, onto function f : A−→B. A function that is both one-to-one and onto is called a correspondence. In a correspondence, every element of A maps to a unique element of B and each element of B has a unique element of A mapping to it. A correspondence is simply a way of pairing the elements of A with the elements of B.

If A ≤m B and A is not Turing-recognizable, then

B is not Turing-recognizable. In a typical application of this corollary, we let A be the complement of ATM. We know that the complement of ATM is not Turing-recognizable from Corollary 4.23. The definition of mapping reducibility implies that A ≤m B means the same as the complement of A ≤m the complement of B. To prove that B isn't recognizable, we may show that ATM ≤m the complement of B. We can also use mapping reducibility to show that certain problems are neither Turing-recognizable nor co-Turing-recognizable, as in the following theorem.

NP-c

Certain problems in NP whose individual complexity is related to that of the entire class. If a polynomial time algorithm exists for any of these problems, all problems in NP would be polynomial time solvable

An accepting computation history for M on w is a sequence of configurations, C1, C2,...,Cl, where C1 is the start configuration of M on w

Cl is an accepting configuration of M, and each Ci legally follows from Ci−1 according to the rules of M.

rejecting computation history for M on w is defined

the same, except that Cl is a rejecting configuration of M, and each Ci legally follows from Ci−1 according to the rules of M.

ETM

ETM = {⟨M⟩ | M is a TM and L(M) = ∅}. Undecidable. Let's write the modified machine described in the proof idea using our standard notation. We call it M1. M1 = "On input x: 1. If x ≠ w, reject. 2. If x = w, run M on input w and accept if M does." This machine has the string w as part of its description. It conducts the test of whether x = w in the obvious way, by scanning the input and comparing it character by character with w to determine whether they are the same. Putting all this together, we assume that TM R decides ETM and construct TM S that decides ATM as follows. S = "On input ⟨M,w⟩, an encoding of a TM M and a string w: 1. Use the description of M and w to construct the TM M1 just described. 2. Run R on input ⟨M1⟩. 3. If R accepts, reject; if R rejects, accept." Note that S must actually be able to compute a description of M1 from a description of M and w. It is able to do so because it only needs to add extra states to M that perform the x = w test. If R were a decider for ETM, S would be a decider for ATM. A decider for ATM cannot exist, so we know that ETM must be undecidable.

FORMULA-GAME = {⟨φ⟩ | Player E has a winning strategy in the formula game associated with φ}.

FORMULA-GAME is PSPACE-complete. The formula φ = ∃x1 ∀x2 ∃x3 ··· [ψ] is TRUE when some setting for x1 exists such that for any setting of x2, a setting of x3 exists such that, and so on ..., where ψ is TRUE under the settings of the variables. Similarly, Player E has a winning strategy in the game associated with φ when Player E can make some assignment to x1 such that for any setting of x2, Player E can make an assignment to x3 such that, and so on ..., where ψ is TRUE under these settings of the variables. The same reasoning applies when the formula doesn't alternate between existential and universal quantifiers. If φ has the form ∀x1,x2,x3 ∃x4,x5 ∀x6 [ψ], Player A would make the first three moves in the formula game to assign values to x1, x2, and x3; then Player E would make two moves to assign x4 and x5; and finally Player A would assign a value to x6. Hence φ ∈ TQBF exactly when φ ∈ FORMULA-GAME, and the theorem follows from Theorem 8.9.

TQBF is PSPACE-complete

First, we give a polynomial space algorithm deciding TQBF. T = "On input ⟨φ⟩, a fully quantified Boolean formula: 1. If φ contains no quantifiers, then it is an expression with only constants, so evaluate φ and accept if it is true; otherwise, reject. 2. If φ equals ∃x ψ, recursively call T on ψ, first with 0 substituted for x and then with 1 substituted for x. If either result is accept, then accept; otherwise, reject. 3. If φ equals ∀x ψ, recursively call T on ψ, first with 0 substituted for x and then with 1 substituted for x. If both results are accept, then accept; otherwise, reject." Algorithm T obviously decides TQBF. To analyze its space complexity, we observe that the depth of the recursion is at most the number of variables. At each level we need only store the value of one variable, so the total space used is O(m), where m is the number of variables that appear in φ. Therefore, T runs in linear space.

Next, we show that TQBF is PSPACE-hard. Let A be a language decided by a TM M in space n^k for some constant k. We give a polynomial time reduction from A to TQBF. The reduction maps a string w to a quantified Boolean formula φ that is true iff M accepts w. To show how to construct φ, we solve a more general problem. Using two collections of variables denoted c1 and c2 representing two configurations and a number t > 0, we construct a formula φ_{c1,c2,t}. If we assign c1 and c2 to actual configurations, the formula is true iff M can go from c1 to c2 in at most t steps. Then we can let φ be the formula φ_{cstart,caccept,h}, where h = 2^(df(n)) for a constant d, chosen so that M has no more than 2^(df(n)) possible configurations on an input of length n. Here, let f(n) = n^k. For convenience, we assume that t is a power of 2.

The formula encodes the contents of configuration cells as in the proof of the Cook-Levin theorem. Each cell has several variables associated with it, one for each tape symbol and state, corresponding to the possible settings of that cell. Each configuration has n^k cells and so is encoded by O(n^k) variables.

If t = 1, we can easily construct φ_{c1,c2,t}. We design the formula to say that either c1 equals c2, or c2 follows from c1 in a single step of M. We express the equality by writing a Boolean expression saying that each of the variables representing c1 contains the same Boolean value as the corresponding variable representing c2. We express the second possibility by using the technique presented in the proof of the Cook-Levin theorem. That is, we can express that c1 yields c2 in a single step of M by writing Boolean expressions stating that the contents of each triple of c1's cells correctly yields the contents of the corresponding triple of c2's cells.

If t > 1, we construct φ_{c1,c2,t} recursively. As a warm-up, let's try one idea that doesn't quite work and then fix it. Let

φ_{c1,c2,t} = ∃m1 [φ_{c1,m1,t/2} ∧ φ_{m1,c2,t/2}].

The symbol m1 represents a configuration of M. Writing ∃m1 is shorthand for ∃x1,...,xl, where l = O(n^k) and x1,...,xl are the variables that encode m1. So this construction of φ_{c1,c2,t} says that M can go from c1 to c2 in at most t steps if some intermediate configuration m1 exists, whereby M can go from c1 to m1 in at most t/2 steps and then from m1 to c2 in at most t/2 steps. Then we construct the two formulas φ_{c1,m1,t/2} and φ_{m1,c2,t/2} recursively. The formula φ_{c1,c2,t} has the correct value; that is, it is TRUE whenever M can go from c1 to c2 within t steps. However, it is too big. Every level of the recursion involved in the construction cuts t in half but roughly doubles the size of the formula. Hence we end up with a formula of size roughly t. Initially t = 2^(df(n)), so this method gives an exponentially large formula.

To reduce the size of the formula, we use the ∀ quantifier in addition to the ∃ quantifier. Let

φ_{c1,c2,t} = ∃m1 ∀(c3,c4) ∈ {(c1,m1),(m1,c2)} [φ_{c3,c4,t/2}].

The introduction of the new variables representing the configurations c3 and c4 allows us to "fold" the two recursive subformulas into a single subformula, while preserving the original meaning. By writing ∀(c3,c4) ∈ {(c1,m1),(m1,c2)}, we indicate that the variables representing the configurations c3 and c4 may take the values of the variables of c1 and m1 or of m1 and c2, respectively, and that the resulting formula φ_{c3,c4,t/2} is true in either case. We may replace the construct ∀x ∈ {y,z} [...] with the equivalent construct ∀x [(x = y ∨ x = z) → ...] to obtain a syntactically correct quantified Boolean formula. Recall that in Section 0.2, we showed that Boolean implication (→) and Boolean equality (=) can be expressed in terms of AND and NOT. Here, for clarity, we use the symbol = for Boolean equality instead of the equivalent symbol ↔ used in Section 0.2.

To calculate the size of the formula φ_{cstart,caccept,h}, where h = 2^(df(n)), we note that each level of the recursion adds a portion of the formula that is linear in the size of the configurations and is thus of size O(f(n)). The number of levels of the recursion is log(2^(df(n))), or O(f(n)). Hence the size of the resulting formula is O(f^2(n)).
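A Python sketch of algorithm T, with fully quantified formulas encoded as nested tuples ("E", var, sub) / ("A", var, sub) around a predicate; this encoding is an assumption for illustration. As in the analysis above, the recursion depth equals the number of variables:

def T(phi, assignment):
    if callable(phi):                    # no quantifiers left: evaluate the expression
        return phi(assignment)
    quant, x, sub = phi
    results = [T(sub, {**assignment, x: b}) for b in (0, 1)]   # substitute 0, then 1
    return any(results) if quant == "E" else all(results)

f = ("E", "x", ("A", "y", lambda a: a["x"] or a["y"]))   # encodes: exists x, for all y, (x OR y)
print(T(f, {}))   # True: take x = 1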

A language is in NP iff it is decided by some nondeterministic polynomial time Turing machine.

For the forward direction of this theorem, let A ∈ NP and show that A is decided by a polynomial time NTM N. Let V be the polynomial time verifier for A that exists by the definition of NP. Assume that V is a TM that runs in time n^k and construct N as follows. N = "On input w of length n: 1. Nondeterministically select string c of length at most n^k. 2. Run V on input ⟨w,c⟩. 3. If V accepts, accept; otherwise, reject." To prove the other direction of the theorem, assume that A is decided by a polynomial time NTM N and construct a polynomial time verifier V as follows. V = "On input ⟨w,c⟩, where w and c are strings: 1. Simulate N on input w, treating each symbol of c as a description of the nondeterministic choice to make at each step (as in the proof of Theorem 3.16). 2. If this branch of N's computation accepts, accept; otherwise, reject."

MIN TM = {<M>|M is a minimal TM}

If M is a Turing machine, then we say that the length of the description <M> of M is the number of symbols in the string describing M. Say that M is minimal if there is no Turing machine equivalent to M that has a shorter description

Use the undecidability of ATM to prove the undecidability of the halting problem by reducing ATM to HALT TM

HALT TM is undecidable. This proof is by contradiction. We assume that HALT TM is decidable and use that assumption to show that ATM is decidable, contradicting Theorem 4.11. The key idea is to show that ATM is reducible to HALT TM. Let's assume that we have a TM R that decides HALT TM. Then we use R to construct S, a TM that decides ATM. To get a feel for the way to construct S, pretend that you are S. Your task is to decide ATM. You are given an input of the form ⟨M,w⟩. You must output accept if M accepts w, and you must output reject if M loops or rejects on w. Try simulating M on w. If it accepts or rejects, do the same. But you may not be able to determine whether M is looping, and in that case your simulation will not terminate. That's bad because you are a decider and thus never permitted to loop. So this idea by itself does not work. Instead, use the assumption that you have TM R that decides HALT TM. With R, you can test whether M halts on w. If R indicates that M doesn't halt on w, reject because ⟨M,w⟩ isn't in ATM. However, if R indicates that M does halt on w, you can do the simulation without any danger of looping. Thus, if TM R exists, we can decide ATM, but we know that ATM is undecidable. By virtue of this contradiction, we can conclude that R does not exist. Therefore, HALT TM is undecidable.

PROOF Let's assume for the purpose of obtaining a contradiction that TM R decides HALT TM. We construct TM S to decide ATM, with S operating as follows. S = "On input ⟨M,w⟩, an encoding of a TM M and a string w: 1. Run TM R on input ⟨M,w⟩. 2. If R rejects, reject. 3. If R accepts, simulate M on w until it halts. 4. If M has accepted, accept; if M has rejected, reject." Clearly, if R decides HALT TM, then S decides ATM. Because ATM is undecidable, HALT TM also must be undecidable.

VERTEX-COVER is NP-complete

Here are the details of a reduction from 3SAT to VERTEX-COVER that operates in polynomial time. The reduction maps a Boolean formula φ to a graph G and a value k. For each variable x in φ, we produce an edge connecting two nodes. We label the two nodes in this gadget x and ¬x. Setting x to be TRUE corresponds to selecting the node labeled x for the vertex cover, whereas FALSE corresponds to the node labeled ¬x. The gadgets for the clauses are a bit more complex. Each clause gadget is a triple of nodes that are labeled with the three literals of the clause. These three nodes are connected to each other and to the nodes in the variable gadgets that have the identical labels. Thus, the total number of nodes that appear in G is 2m + 3l, where φ has m variables and l clauses. Let k be m + 2l. For example, if φ = (x1 ∨ x1 ∨ x2) ∧ (¬x1 ∨ ¬x2 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x2), the reduction produces ⟨G,k⟩ from φ, where k = 8 and G takes the form shown in the following figure (construction in phone).

To prove that this reduction works, we need to show that φ is satisfiable if and only if G has a vertex cover with k nodes. We start with a satisfying assignment. We first put the nodes of the variable gadgets that correspond to the true literals in the assignment into the vertex cover. Then, we select one true literal in every clause and put the remaining two nodes from every clause gadget into the vertex cover. Now we have a total of k nodes. They cover all edges because every variable gadget edge is clearly covered, all three edges within every clause gadget are covered, and all edges between variable and clause gadgets are covered. Hence G has a vertex cover with k nodes. Second, if G has a vertex cover with k nodes, we show that φ is satisfiable by constructing the satisfying assignment. The vertex cover must contain one node in each variable gadget and two in every clause gadget in order to cover the edges of the variable gadgets and the three edges within the clause gadgets. That accounts for all the nodes, so none are left over. We take the nodes of the variable gadgets that are in the vertex cover and assign TRUE to the corresponding literals. That assignment satisfies φ because each of the three edges connecting the variable gadgets with each clause gadget is covered, and only two nodes of the clause gadget are in the vertex cover. Therefore, one of the edges must be covered by a node from a variable gadget and so that assignment satisfies the corresponding clause.
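A Python sketch of the graph construction (the literal encoding with "~" and the gadget-node labels are made-up conventions):

def reduce_3sat_to_vc(phi, variables):
    nodes, edges = [], []
    for v in variables:                        # variable gadget: edge v -- ~v
        nodes += [v, "~" + v]
        edges.append((v, "~" + v))
    for i, clause in enumerate(phi):           # clause gadget: a labeled triangle
        gadget = [f"c{i}.{lit}" for lit in clause]
        nodes += gadget
        edges += [(gadget[0], gadget[1]), (gadget[1], gadget[2]),
                  (gadget[0], gadget[2])]
        edges += [(g, lit) for g, lit in zip(gadget, clause)]   # link to variable gadgets
    k = len(variables) + 2 * len(phi)          # k = m + 2l
    return nodes, edges, k

phi = [["x1", "x1", "x2"], ["~x1", "~x2", "~x2"], ["~x1", "x2", "x2"]]
print(reduce_3sat_to_vc(phi, ["x1", "x2"])[2])   # 8, matching the example above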

reg pumping lemma

If A is a regular language, then there is a number p (the pumping length) where if s is any string in A of length at least p, then s may be divided into three pieces, s = xyz, satisfying the following conditions: 1. for each i ≥ 0, x y^i z ∈ A, 2. |y| > 0, and 3. |xy| ≤ p.
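A small Python helper (an illustration, not a proof tool) that checks the three conditions for one proposed split s = xyz against a membership predicate:

def check_pumping(x, y, z, p, in_A, i_max=4):
    cond1 = all(in_A(x + y * i + z) for i in range(i_max + 1))   # xy^iz in A for i = 0..i_max
    cond2 = len(y) > 0
    cond3 = len(x + y) <= p
    return cond1 and cond2 and cond3

# A = strings over {0,1} ending in 1; p = 2, s = "01", split x = "", y = "0", z = "1".
print(check_pumping("", "0", "1", 2, lambda w: w.endswith("1")))   # True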

Every nondeterministic finite automaton has an equivalent deterministic finite automaton.

If a language is recognized by an NFA, then we must show the existence of a DFA that also recognizes it. The idea is to convert the NFA into an equivalent DFA that simulates the NFA. Recall the "reader as automaton" strategy for designing finite automata. How would you simulate the NFA if you were pretending to be a DFA? What do you need to keep track of as the input string is processed? In the examples of NFAs, you kept track of the various branches of the computation by placing a finger on each state that could be active at given points in the input. You updated the simulation by moving, adding, and removing fingers according to the way the NFA operates. All you needed to keep track of was the set of states having fingers on them. If k is the number of states of the NFA, it has 2^k subsets of states. Each subset corresponds to one of the possibilities that the DFA must remember, so the DFA simulating the NFA will have 2^k states. Now we need to figure out which will be the start state and accept states of the DFA, and what will be its transition function. We can discuss this more easily after setting up some formal notation.

PROOF Let N = (Q, Σ, δ, q0, F) be the NFA recognizing some language A. We construct a DFA M = (Q′, Σ, δ′, q0′, F′) recognizing A. Before doing the full construction, let's first consider the easier case wherein N has no ε arrows. Later we take the ε arrows into account. 1. Q′ = P(Q). Every state of M is a set of states of N. Recall that P(Q) is the set of subsets of Q. 2. For R ∈ Q′ and a ∈ Σ, let δ′(R, a) = {q ∈ Q | q ∈ δ(r, a) for some r ∈ R}. If R is a state of M, it is also a set of states of N. When M reads a symbol a in state R, it shows where a takes each state in R. Because each state may go to a set of states, we take the union of all these sets. Another way to write this expression is δ′(R, a) = ⋃_{r∈R} δ(r, a). 3. q0′ = {q0}. M starts in the state corresponding to the collection containing just the start state of N. 4. F′ = {R ∈ Q′ | R contains an accept state of N}. The machine M accepts if one of the possible states that N could be in at this point is an accept state.

Now we need to consider the ε arrows. To do so, we set up an extra bit of notation. For any state R of M, we define E(R) to be the collection of states that can be reached from members of R by going only along ε arrows, including the members of R themselves. Formally, for R ⊆ Q let E(R) = {q | q can be reached from R by traveling along 0 or more ε arrows}. Then we modify the transition function of M to place additional fingers on all states that can be reached by going along ε arrows after every step. Replacing δ(r, a) by E(δ(r, a)) achieves this effect. Thus δ′(R, a) = {q ∈ Q | q ∈ E(δ(r, a)) for some r ∈ R}. Additionally, we need to modify the start state of M to move the fingers initially to all possible states that can be reached from the start state of N along the ε arrows. Changing q0′ to be E({q0}) achieves this effect. We have now completed the construction of the DFA M that simulates the NFA N.

The construction of M obviously works correctly. At every step in the computation of M on an input, it clearly enters a state that corresponds to the subset of states that N could be in at that point. Thus our proof is complete.
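A Python sketch of the subset construction for an NFA without ε arrows (the dictionary encoding is an assumption); it builds only the reachable subsets instead of all 2^k:

def nfa_to_dfa(nfa_start, nfa_accept, nfa_delta, alphabet):
    """nfa_delta[(q, a)] is a set of successor states (possibly empty)."""
    start = frozenset({nfa_start})
    states, todo, delta = {start}, [start], {}
    while todo:
        R = todo.pop()
        for a in alphabet:
            # take the union of delta(r, a) over all r in R
            S = frozenset(q for r in R for q in nfa_delta.get((r, a), set()))
            delta[(R, a)] = S
            if S not in states:
                states.add(S)
                todo.append(S)
    accept = {R for R in states if R & nfa_accept}   # R contains an accept state of N
    return start, accept, delta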

A = {0k1k|k ≥0}is a member of (space)

L. In Section 7.1, on page 275, we described a Turing machine that decides A by zig-zagging back and forth across the input, crossing off the 0s and 1s as they are matched. That algorithm uses linear space to record which positions have been crossed off, but it can be modified to use only log space. The log space TM for A cannot cross off the 0s and 1s that have been matched on the input tape because that tape is read-only. Instead, the machine counts the number of 0s and, separately, the number of 1s in binary on the work tape. The only space required is that used to record the two counters. In binary, each counter uses only logarithmic space and hence the algorithm runs in O(log n) space. Therefore, A ∈ L.

L = SPACE(logn)

L is the class of languages that are decidable in logarithmic space on a deterministic Turing machine

Space/time heirarchy

L ⊆ NL = coNL ⊆ P ⊆ NP ⊆ PSPACE = NPSPACE ⊆ EXPTIME ⊆ EXPSPACE

Show that the language of properly nested parentheses is in L

Let M be a DTM that decides A in log space. M = "On input w, where w is a sequence of parentheses: 1. Starting at the first character of w, move right across w. 2. When a left paren is encountered, add 1 to the counter on the work tape and move right. 3. When a right paren is encountered and the counter is 0, reject; otherwise subtract 1 from the counter and move right. 4. At the end, accept if the counter is 0; reject if it is not." The only space used by this algorithm is the counter on the work tape. If the counter is kept in binary, the space used is at most O(log k), where k is the total length of the input string w. As k is less than or equal to n, this places the language A in L. So A ∈ L.
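The same counter algorithm as a Python sketch; the integer counter plays the role of the binary work tape:

def balanced(w):
    count = 0
    for ch in w:                  # move right across w
        if ch == "(":
            count += 1            # left paren: add 1
        else:
            if count == 0:        # right paren with counter at 0: reject
                return False
            count -= 1
    return count == 0             # accept iff the counter is restored to 0

print(balanced("(()())"), balanced("())("))   # True False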

B is the language of properly nested parens and brackets; prove that B is in L

Let M be a DTM that decides B in log space with the following construction. M = "On input w, where w is a sequence of parens and brackets: 1. If w = ε, accept and halt. 2. Scan w left to right with a counter i = 0, adding 1 at each opening symbol, ( or [, and subtracting 1 at each closing symbol, ) or ]. If i ever becomes negative or is not restored to 0 at the end of the input, reject. 3. For each closing symbol in w, find its matching opening symbol: scan leftward from it with a counter that starts at 1, adding 1 at each closing symbol and subtracting 1 at each opening symbol; the position where the counter first reaches 0 holds the matching opener. If the two symbols are of different types, reject. 4. If every check passes, accept." (Keeping separate, independent counters for parens and for brackets is not enough: that would accept improperly nested strings such as ([)].) The work tape holds only a constant number of counters and positions; stored in binary, each uses at most O(log k) space, where k is the total length of the input w. As k ≤ n, this places B in L.

running time or time complexity

Let M be a deterministic Turing machine that halts on all inputs. The ________________ of M is the function f : N−→N, where f(n) is the maximum number of steps that M uses on any input of length n. If f(n) is the running time of M, we say that M runs in time f(n) and that M is an f(n) time Turing machine. Customarily we use n to represent the length of the input.

If A ≤P B and B ∈P, then A ∈P

Let M be the polynomial time algorithm deciding B and f be the polynomial time reduction from A to B. We describe a polynomial time algorithm N deciding A as follows. N = "On input w: 1. Compute f(w). 2. Run M on input f(w) and output whatever M outputs." We have w ∈ A whenever f(w) ∈ B because f is a reduction from A to B. Thus, M accepts f(w) whenever w ∈ A. Moreover, N runs in polynomial time because each of its two stages runs in polynomial time. Note that stage 2 runs in polynomial time because the composition of two polynomials is a polynomial

Savitchs Theorem Proof

Let N be an NTM deciding a language A in space f(n). We construct a deterministic TM M deciding A. Machine M uses the procedure CANYIELD, which tests whether one of N's configurations can yield another within a specified number of steps. This procedure solves the yieldability problem described in the proof idea. Let w be a string considered as input to N. For configurations c1 and c2 of N, and integer t, CANYIELD(c1, c2, t) outputs accept if N can go from configuration c1 to configuration c2 in t or fewer steps along some nondeterministic path. If not, CANYIELD outputs reject. For convenience, we assume that t is a power of 2. CANYIELD = "On input c1, c2, and t: 1. If t = 1, then test directly whether c1 = c2 or whether c1 yields c2 in one step according to the rules of N. Accept if either test succeeds; reject if both fail. 2. If t > 1, then for each configuration cm of N using space f(n): 3. Run CANYIELD(c1, cm, t/2). 4. Run CANYIELD(cm, c2, t/2). 5. If steps 3 and 4 both accept, then accept. 6. If haven't yet accepted, reject." Now we define M to simulate N as follows. We first modify N so that when it accepts, it clears its tape and moves the head to the leftmost cell, thereby entering a configuration called caccept. We let cstart be the start configuration of N on w. We select a constant d so that N has no more than 2^(df(n)) configurations using f(n) tape, where n is the length of w. Then we know that 2^(df(n)) provides an upper bound on the running time of any branch of N on w. M = "On input w: 1. Output the result of CANYIELD(cstart, caccept, 2^(df(n)))." Algorithm CANYIELD obviously solves the yieldability problem, and hence M correctly simulates N. We need to analyze it to verify that M works within O(f^2(n)) space. Whenever CANYIELD invokes itself recursively, it stores the current stage number and the values of c1, c2, and t on a stack so that these values may be restored upon return from the recursive invocation. Each level of the recursion thus uses O(f(n)) additional space. Furthermore, each level of the recursion divides the size of t in half. Initially t starts out equal to 2^(df(n)), so the depth of the recursion is O(log 2^(df(n))) or O(f(n)). Therefore, the total space used is O(f^2(n)), as claimed. One technical difficulty arises in this argument because algorithm M needs to know the value of f(n) when it calls CANYIELD. We can handle this difficulty by modifying M so that it tries f(n) = 1, 2, 3, ... . For each value f(n) = i, the modified algorithm uses CANYIELD to determine whether the accept configuration is reachable. In addition, it uses CANYIELD to determine whether N uses at least space i + 1 by testing whether N can reach any of the configurations of length i + 1 from the start configuration. If the accept configuration is reachable, M accepts; if no configuration of length i + 1 is reachable, M rejects; and otherwise, M continues with f(n) = i + 1. (We could have handled this difficulty in another way by assuming that M can compute f(n) within O(f(n)) space, but then we would need to add that assumption to the statement of the theorem.)
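A rough Python transcription of CANYIELD may make the recursion easier to see; the helpers yields(c1, c2) (the one-step yield relation of N) and all_configs() (an enumerator of every configuration using space f(n)) are assumed interfaces, not from the text:

def can_yield(c1, c2, t, yields, all_configs):
    # Can c1 reach c2 in at most t steps (t a power of 2)?
    if t == 1:
        return c1 == c2 or yields(c1, c2)
    for cm in all_configs():              # every configuration using space f(n)
        if (can_yield(c1, cm, t // 2, yields, all_configs) and
                can_yield(cm, c2, t // 2, yields, all_configs)):
            return True
    return False

The recursion depth is log t = O(f(n)), and each frame holds a constant number of configurations of size O(f(n)), mirroring the O(f^2(n)) space bound in the proof.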

The class of regular languages is closed under the union operation.

Let N1 = (Q1, Σ, δ1, q1, F1) recognize A1, and N2 = (Q2, Σ, δ2, q2, F2) recognize A2. Construct N = (Q, Σ, δ, q0, F) to recognize A1 ∪ A2. 1. Q = {q0} ∪ Q1 ∪ Q2. The states of N are all the states of N1 and N2, with the addition of a new start state q0. 2. The state q0 is the start state of N. 3. The set of accept states F = F1 ∪ F2. The accept states of N are all the accept states of N1 and N2. That way, N accepts if either N1 accepts or N2 accepts. 4. Define δ so that for any q ∈ Q and any a ∈ Σε: δ(q, a) = δ1(q, a) if q ∈ Q1; δ2(q, a) if q ∈ Q2; {q1, q2} if q = q0 and a = ε; ∅ if q = q0 and a ≠ ε.
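The same construction as a Python sketch: each NFA is assumed to be a dict with keys Q, q0, F, and delta (a map from (state, symbol) pairs to sets of states, with '' standing for ε), and the two state sets are assumed disjoint:

def nfa_union(n1, n2):
    q0 = 'new_start'                        # the added start state
    delta = {}
    delta.update(n1['delta'])               # keep all old transitions
    delta.update(n2['delta'])
    delta[(q0, '')] = {n1['q0'], n2['q0']}  # epsilon arrows to both old starts
    return {'Q': {q0} | n1['Q'] | n2['Q'],
            'q0': q0,
            'F': n1['F'] | n2['F'],         # accept if either machine accepts
            'delta': delta}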

The class of regular languages is closed under the concatenation operation

Let N1 = (Q1, Σ, δ1, q1, F1) recognize A1, and N2 = (Q2, Σ, δ2, q2, F2) recognize A2. Construct N = (Q, Σ, δ, q1, F2) to recognize A1 ◦ A2. 1. Q = Q1 ∪ Q2. The states of N are all the states of N1 and N2. 2. The start state of N is the same as the start state q1 of N1. 3. The accept states F2 of N are the same as the accept states of N2. 4. Define δ so that for any q ∈ Q and any a ∈ Σε: δ(q, a) = δ1(q, a) if q ∈ Q1 and q ∉ F1; δ1(q, a) if q ∈ F1 and a ≠ ε; δ1(q, a) ∪ {q2} if q ∈ F1 and a = ε; δ2(q, a) if q ∈ Q2.

The class of regular languages is closed under the star operation.

Let N1 = (Q1, Σ, δ1, q1, F1) recognize A1. Construct N = (Q, Σ, δ, q0, F) to recognize A1*. 1. Q = {q0} ∪ Q1. The states of N are the states of N1 plus a new start state. 2. The state q0 is the new start state. 3. F = {q0} ∪ F1. The accept states are the old accept states plus the new start state. 4. Define δ so that for any q ∈ Q and any a ∈ Σε: δ(q, a) = δ1(q, a) if q ∈ Q1 and q ∉ F1; δ1(q, a) if q ∈ F1 and a ≠ ε; δ1(q, a) ∪ {q1} if q ∈ F1 and a = ε; {q1} if q = q0 and a = ε; ∅ if q = q0 and a ≠ ε.

Arden's Lemma

Let P and Q be two regular expressions. If P does not contain the empty string ε, then the equation R = Q + RP has the unique solution R = QP*.
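A small illustrative instance (made up, not from the text): over {0, 1}, take Q = 1 and P = 0. Since P does not contain ε, the equation R = 1 + R0 has the unique solution R = QP* = 10*, the strings consisting of a 1 followed by any number of 0s; substituting back, 10* = 1 + (10*)0 checks out.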

Rice's Theorem problem

Let P be any nontrivial property of the language of a Turing machine. Prove that the problem of determining whether a given Turing machine's language has property P is undecidable. In more formal terms, let P be a language consisting of Turing machine descriptions where P fulfills two conditions. First, P is nontrivial—it contains some, but not all, TM descriptions. Second, P is a property of the TM's language—whenever L(M1) = L(M2), we have ⟨M1⟩ ∈ P iff ⟨M2⟩ ∈ P. Here, M1 and M2 are any TMs. Prove that P is an undecidable language.

recursion theorem

Let T be a Turing machine that computes a function t: Σ* × Σ*→Σ*. There is a Turing machine R that computes a function r : Σ*→Σ*, where for every w, r(w) = t(⟨R⟩, w) .

SPACE(f(n))

Let f : N−→R+ be a function. SPACE(f(n)) = {L|L is a language decided by an O(f(n)) space deterministic Turing machine}.

NSPACE(f(n))

Let f : N−→R+ be a function. NSPACE(f(n)) = {L|L is a language decided by an O(f(n)) space nondeterministic Turing machine}.

big o

Let f and g be functions f, g : N−→R+. Say that f(n) = O(g(n)) if positive integers c and n0 exist such that for every integer n ≥ n0, f(n) ≤ c·g(n). When f(n) = O(g(n)), we say that g(n) is an upper bound for f(n), or more precisely, that g(n) is an asymptotic upper bound for f(n), to emphasize that we are suppressing constant factors.

3SAT is polynomial time reducible to CLIQUE

Let φ be a formula with k clauses such as φ = (a1 ∨ b1 ∨ c1) ∧ (a2 ∨ b2 ∨ c2) ∧ ··· ∧ (ak ∨ bk ∨ ck). The reduction f generates the string ⟨G, k⟩, where G is an undirected graph defined as follows. The nodes in G are organized into k groups of three nodes each called the triples, t1, ..., tk. Each triple corresponds to one of the clauses in φ, and each node in a triple corresponds to a literal in the associated clause. Label each node of G with its corresponding literal in φ. The edges of G connect all but two types of pairs of nodes in G. No edge is present between nodes in the same triple, and no edge is present between two nodes with contradictory labels, as in x2 and ¬x2. Figure 7.33 illustrates this construction when φ = (x1 ∨ x1 ∨ x2) ∧ (¬x1 ∨ ¬x2 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x2) (see phone for construction). Now we demonstrate why this construction works. We show that φ is satisfiable iff G has a k-clique. Suppose that φ has a satisfying assignment. In that satisfying assignment, at least one literal is true in every clause. In each triple of G, we select one node corresponding to a true literal in the satisfying assignment. If more than one literal is true in a particular clause, we choose one of the true literals arbitrarily. The nodes just selected form a k-clique. The number of nodes selected is k because we chose one for each of the k triples. Each pair of selected nodes is joined by an edge because no pair fits one of the exceptions described previously. They could not be from the same triple because we selected only one node per triple. They could not have contradictory labels because the associated literals were both true in the satisfying assignment. Therefore, G contains a k-clique. Suppose that G has a k-clique. No two of the clique's nodes occur in the same triple because nodes in the same triple aren't connected by edges. Therefore, each of the k triples contains exactly one of the k clique nodes. We assign truth values to the variables of φ so that each literal labeling a clique node is made true. Doing so is always possible because two nodes labeled in a contradictory way are not connected by an edge and hence both can't be in the clique. This assignment to the variables satisfies φ because each triple contains a clique node and hence each clause contains a literal that is assigned TRUE. Therefore, φ is satisfiable.
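A hedged Python sketch of the reduction f; representing a literal as a signed integer (+i for xi, -i for ¬xi) is an assumption made for illustration:

from itertools import combinations

def threesat_to_clique(clauses):
    # clauses: list of 3-tuples of signed-integer literals
    k = len(clauses)
    # one node per literal occurrence, tagged with its triple index
    nodes = [(j, pos, lit) for j, clause in enumerate(clauses)
                           for pos, lit in enumerate(clause)]
    edges = set()
    for u, v in combinations(nodes, 2):
        same_triple = (u[0] == v[0])
        contradictory = (u[2] == -v[2])     # x_i paired with its negation
        if not same_triple and not contradictory:
            edges.add((u, v))
    return nodes, edges, k                  # the instance <G, k>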

Let A be the language consisting of all strings representing undirected graphs that are connected. Recall that a graph is connected if every node can be reached from every other node by traveling along the edges of the graph. We write A = {⟨G⟩| G is a connected undirected graph}.

M = "On input ⟨G⟩, the encoding of a graph G: 1. Select the first node of G and mark it. 2. Repeat the following stage until no new nodes are marked: 3. For each node in G, mark it if it is attached by an edge to a node that is already marked. 4. Scan all the nodes of G to determine whether they all are marked. If they are, accept; otherwise, reject."

Space Complexity

The space complexity of a deterministic Turing machine M that halts on all inputs is the function f : N−→N, where f(n) is the maximum number of tape cells that M scans on any input of length n. If the space complexity of M is f(n), we also say that M runs in space f(n). If M is a nondeterministic Turing machine wherein all branches halt on all inputs, we define its space complexity f(n) to be the maximum number of tape cells that M scans on any branch of its computation for any input of length n.

time to decide A = {0^k1^k|k ≥ 0}.

M1 = "On input string w: 1. Scan across the tape and reject if a 0 is found to the right of a 1. 2. Repeat if both 0s and 1s remain on the tape: 3. Scan across the tape, crossing off a single 0 and a single 1. 4. If 0s still remain after all the 1s have been crossed off, or if 1s still remain after all the 0s have been crossed off, reject. Otherwise, if neither 0s nor 1s remain on the tape, accept." To analyze M1, we consider each of its four stages separately. In stage 1, the machine scans across the tape to verify that the input is of the form 0∗1∗. Performing this scan uses n steps. As we mentioned earlier, we typically use n to represent the length of the input. Repositioning the head at the left-hand end of the tape uses another n steps. So the total used in this stage is 2n steps. In big-O notation, we say that this stage uses O(n) steps. Note that we didn't mention the repositioning of the tape head in the machine description. Using asymptotic notation allows us to omit details of the machine description that affect the running time by at most a constant factor. In stages 2 and 3, the machine repeatedly scans the tape and crosses off a 0 and 1 on each scan. Each scan uses O(n) steps. Because each scan crosses off two symbols, at most n/2 scans can occur. So the total time taken by stages 2 and 3 is (n/2)O(n) = O(n2) steps. In stage 4, the machine makes a single scan to decide whether to accept or reject. The time taken in this stage is at most O(n). Thus, the total time of M1 on an input of length n is O(n) + O(n2) + O(n), or O(n2). In other words, its running time is O(n2), which completes the time analysis of this machine.

SAT can be solved in linear space

M1 = "On inputhφi, where φ is a Boolean formula: 1. For each truth assignment to the variables x1,...,xm of φ: 2. Evaluate φ on that truth assignment. 3. If φ ever evaluated to 1, accept; if not, reject." Machine M1 clearly runs in linear space because each iteration of the loop can reuse the same portion of the tape. The machine needs to store only the current truth assignment, and that can be done with O(m) space. The number of variables m is at most n, the length of the input, so this machine runs in space O(n).

A ∈ TIME(n log n).

M2 = "On input string w: 1. Scan across the tape and reject if a 0 is found to the right of a 1. 2. Repeat as long as some 0s and some 1s remain on the tape: 3. Scan across the tape, checking whether the total number of 0s and 1s remaining is even or odd. If it is odd, reject. 4. Scan again across the tape, crossing off every other 0 starting with the first 0, and then crossing off every other 1 starting with the first 1. 5. If no 0s and no 1s remain on the tape, accept. Otherwise, reject."

A in O(n)

M3 = "On input string w: 1. Scan across tape 1 and reject if a 0 is found to the right of a 1. 2. Scan across the 0s on tape 1 until the first 1. At the same time, copy the 0s onto tape 2. 3. Scan across the 1s on tape 1 until the end of the input. For each 1 read on tape 1, cross off a 0 on tape 2. If all 0s are crossed off before all the 1s are read, reject. 4. If all the 0s have now been crossed off, accept. If any 0s remain, reject."

EQrex (two equiv regexes) in PSPACE

M="on input (R,S), where R,S are equivalent regular expression: 1Construct NDFA Nx=(Qx, E, Deltx, qx, Fx) st L(Nx)=L(X) for X={R,S} 2. Let mx={qx} Repeat 2^max X in {R,S}^|Qx| times If mk intersection As = phi <=> As not in phi, accept Pick any a in E and change mx to U q in mx Deltx(q,a) for X in {R,S} Reject this nondet TM decides co EQrex in poly space PSPACE is closed under complement therefore EQrec closed under complement

ALLNFA = {⟨A⟩ | A is an NFA and L(A) = Σ*}; its complement is in nondeterministic space O(n)

N = "On inputhMi, where M is an NFA: 1. Place a marker on the start state of the NFA. 2. Repeat 2q times, where q is the number of states of M: 3. Nondeterministically select an input symbol and change the positions of the markers on M's states to simulate reading that symbol. 4. Accept if stages 2 and 3 reveal some string that M rejects; that is, if at some point none of the markers lie on accept states of M. Otherwise, reject." If M rejects any strings, it must reject one of length at most 2q because in any longer string that is rejected, the locations of the markers described in the preceding algorithm would repeat. The section of the string between the repetitions can be removed to obtain a shorter rejected string. Hence N decides ALLNFA. (Note that N accepts improperly formed inputs, too.) The only space needed by this algorithm is for storing the location of the markers and the repeat loop counter, and that can be done with linear space. Hence the algorithm runs in nondeterministic space O(n). Next, we prove a theorem that provides information about the deterministic space complexity of ALLNFA.

HAMPATH NTM

N1 = "On inputhG,s,ti, where G is a directed graph with nodes s and t: 1. Write a list of m numbers, p1,...,pm, where m is the number of nodes in G. Each number in the list is nondeterministically selected to be between 1 and m. 2. Check for repetitions in the list. If any are found, reject. 3. Check whether s = p1 and t = pm. If either fail, reject. 4. For each i between 1 and m−1, check whether (pi,pi+1) is an edge of G. If any are not, reject. Otherwise, all tests have been passed, so accept."

P -> Q

NOT P V Q

P V Q

NOT( NOT P AND NOT Q)

P XOR Q

NOT(P <->Q)

A9.3 Prove that NTIME(n) is a strict subset of PSPACE

NTIME(n) ⊆ NSPACE(n) because any Turing machine that operates in time t(n) on every computation branch can use at most t(n) tape cells on every branch. Furthermore, NSPACE(n) ⊆ SPACE(n^2) due to Savitch's theorem. However, SPACE(n^2) ⊊ SPACE(n^3) because of the space hierarchy theorem. The result follows because SPACE(n^3) ⊆ PSPACE.

3SAT is NP-c

Obviously 3SAT is in NP, so we only need to prove that all languages in NP reduce to 3SAT in polynomial time. One way to do so is by showing that SAT polynomial time reduces to 3SAT. Instead, we modify the proof of Theorem 7.37 so that it directly produces a formula in conjunctive normal form with three literals per clause. Theorem 7.37 produces a formula that is already almost in conjunctive normal form. Formula φcell is a big AND of subformulas, each of which contains a big OR and a big AND of ORs. Thus, φcell is an AND of clauses and so is already in cnf. Formula φstart is a big AND of variables. Taking each of these variables to be a clause of size 1, we see that φstart is in cnf. Formula φaccept is a big OR of variables and is thus a single clause. Formula φmove is the only one that isn't already in cnf, but we may easily convert it into a formula that is in cnf as follows. Recall that φmove is a big AND of subformulas, each of which is an OR of ANDs that describes all possible legal windows. The distributive laws, as described in Chapter 0, state that we can replace an OR of ANDs with an equivalent AND of ORs. Doing so may significantly increase the size of each subformula, but it can only increase the total size of φmove by a constant factor because the size of each subformula depends only on N. The result is a formula that is in conjunctive normal form. Now that we have written the formula in cnf, we convert it to one with three literals per clause. In each clause that currently has one or two literals, we replicate one of the literals until the total number is three. In each clause that has more than three literals, we split it into several clauses and add additional variables to preserve the satisfiability or nonsatisfiability of the original. For example, we replace clause (a1 ∨ a2 ∨ a3 ∨ a4), wherein each ai is a literal, with the two-clause expression (a1 ∨ a2 ∨ z) ∧ (¬z ∨ a3 ∨ a4), wherein z is a new variable. If some setting of the ai's satisfies the original clause, we can find some setting of z so that the two new clauses are satisfied and vice versa. In general, if the clause contains l literals, (a1 ∨ a2 ∨ ··· ∨ al), we can replace it with the l − 2 clauses (a1 ∨ a2 ∨ z1) ∧ (¬z1 ∨ a3 ∨ z2) ∧ (¬z2 ∨ a4 ∨ z3) ∧ ··· ∧ (¬z_(l−3) ∨ a_(l−1) ∨ al). We may easily verify that the new formula is satisfiable iff the original formula was, so the proof is complete.
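A hedged Python sketch of the clause-splitting step, again encoding a literal as a signed integer; the fresh-variable supply starting at 1000 assumes the original variables are numbered below that:

from itertools import count

fresh = count(1000)   # assumed: original variables are numbered < 1000

def split_clause(literals):
    l = len(literals)
    if l <= 3:
        # pad short clauses by replicating a literal up to three
        return [tuple((list(literals) + [literals[0]] * 2)[:3])]
    z = [next(fresh) for _ in range(l - 3)]
    clauses = [(literals[0], literals[1], z[0])]
    for i in range(l - 4):
        clauses.append((-z[i], literals[i + 2], z[i + 1]))
    clauses.append((-z[-1], literals[-2], literals[-1]))
    return clauses          # l - 2 three-literal clauses, satisfiability preserved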

There is a computable function q : Σ∗−→Σ∗, where if w is any string, q(w) is the description of a Turing machine Pw that prints out w and then halts.

Once we understand the statement of this lemma, the proof is easy. Obviously, we can take any string w and construct from it a Turing machine that has w built into a table so that the machine can simply output w when started. The following TM Q computes q(w). Q = "On input string w: 1. Construct the following Turing machine Pw. Pw = "On any input: 1. Erase input. 2. Write w on the tape. 3. Halt." 2. Output ⟨Pw⟩."

SAT = {⟨φ⟩ | φ is a satisfiable Boolean formula}; SAT ∈ P iff

P=NP

PSPACE is closed under union, complementation, and star

PSPACE is the class of languages that are decidable in polynomial space on a deterministic TM: PSPACE = ⋃_k SPACE(n^k). Let L1 and L2 be languages decided by PSPACE TMs M1 and M2, where M1 decides L1 in deterministic space O(n^k) and M2 decides L2 in deterministic space O(n^l). Union: M = "On input w: 1. Run M1 on w; if M1 accepts, then accept. 2. Else run M2 on w, reusing the same work space; if M2 accepts, then accept. 3. Else reject." On any input w of length n, M uses space O(n^max{k,l}), so M is a polynomial space deterministic decider for L1 ∪ L2; thus PSPACE is closed under union. Complementation: M = "On input w: 1. Run M1 on w; if M1 accepts, then reject. 2. Else accept." This M uses space O(n^k) and decides the complement of L1, so PSPACE is closed under complementation. Star: M = "On input w: 1. If w = ε, accept. 2. For each way of splitting w into pieces w = w1w2···wm (a split can be recorded with |w| bits marking the cut positions): 3. Run M1 on each piece wi in turn, reusing the same work space; if M1 accepts every piece, accept. 4. If no split works, reject." The split marker uses O(n) space and each run of M1 uses O(n^k) space, so M decides L1* in polynomial space. Therefore, PSPACE is closed under star.

time complexity class

TIME(t(n)), where t : N−→R+ is a function, is the collection of all languages that are decidable by an O(t(n)) time Turing machine

A9.2 Prove that TIME(2^n) is a strict subset of TIME(2^(2n)).

The containment TIME(2^n) ⊆ TIME(2^(2n)) holds because 2^n ≤ 2^(2n). The containment is proper by virtue of the time hierarchy theorem. The function 2^(2n) is time constructible because a TM can write the number 1 followed by 2n 0s in O(2^(2n)) time. Hence the theorem guarantees that a language A exists that can be decided in O(2^(2n)) time but not in o(2^(2n)/log 2^(2n)) = o(2^(2n)/2n) time. Therefore, A ∈ TIME(2^(2n)) but A ∉ TIME(2^n).

Time hierarchy proof

The following O(t(n)) time algorithm D decides a language A that is not decidable in o(t(n)/log t(n)) time. D = "On input w: 1. Let n be the length of w. 2. Compute t(n) using time constructibility and store the value ⌈t(n)/log t(n)⌉ in a binary counter. Decrement this counter before each step used to carry out stages 4 and 5. If the counter ever hits 0, reject. 3. If w is not of the form ⟨M⟩10* for some TM M, reject. 4. Simulate M on w. 5. If M accepts, then reject. If M rejects, then accept."

Church-Turing thesis

The thesis that if there exists an algorithm to do a symbol manipulation task, then there exists a Turing machine to do that task. The λ-calculus and Turing machines are equivalent in power; the intuitive notion of algorithm equals the Turing machine notion of algorithm.

9.1 Prove that TIME(2^n) = TIME(2^(n+1)).

The time complexity classes are defined in terms of the big-O notation, so constant factors have no effect. The function 2^(n+1) is O(2^n), and thus A ∈ TIME(2^n) iff A ∈ TIME(2^(n+1)).

Ackermann Function

A total computable function that is not primitive recursive: A(m, n) = n + 1 if m = 0; A(m, n) = A(m − 1, 1) if m > 0 and n = 0; A(m, n) = A(m − 1, A(m, n − 1)) if m > 0 and n > 0.
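A direct Python transcription of the three cases (purely illustrative; the function grows so fast that only tiny arguments finish):

def ackermann(m, n):
    if m == 0:
        return n + 1                                  # base case
    if n == 0:
        return ackermann(m - 1, 1)                    # m > 0, n = 0
    return ackermann(m - 1, ackermann(m, n - 1))      # m > 0, n > 0

# e.g. ackermann(2, 3) == 9 and ackermann(3, 3) == 61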

EQTM is neither

Turing-recognizable nor co-Turing-recognizable

SUBSET-SUM is NP-c

We already know that SUBSET-SUM ∈ NP, so we now show that 3SAT ≤P SUBSET-SUM. Let φ be a Boolean formula with variables x1, ..., xl and clauses c1, ..., ck. The reduction converts φ to an instance of the SUBSET-SUM problem ⟨S, t⟩, wherein the elements of S and the number t are the rows in the table in Figure 7.57, expressed in ordinary decimal notation. The rows above the double line are labeled y1, z1, y2, z2, ..., yl, zl and g1, h1, g2, h2, ..., gk, hk and constitute the elements of S. The row below the double line is t. Thus, S contains one pair of numbers, yi, zi, for each variable xi in φ. The decimal representation of these numbers is in two parts, as indicated in the table. The left-hand part comprises a 1 followed by l − i 0s. The right-hand part contains one digit for each clause, where the digit of yi in column cj is 1 if clause cj contains literal xi, and the digit of zi in column cj is 1 if clause cj contains literal ¬xi. Digits not specified to be 1 are 0. The table is partially filled in to illustrate sample clauses, c1, c2, and ck: (x1 ∨ ¬x2 ∨ x3) ∧ (x2 ∨ x3 ∨ ···) ∧ ··· ∧ (¬x3 ∨ ··· ∨ ···). Additionally, S contains one pair of numbers, gj, hj, for each clause cj. These two numbers are equal and consist of a 1 followed by k − j 0s. Next, we show why this construction works. We demonstrate that φ is satisfiable iff some subset of S sums to t. Suppose that φ is satisfiable. We construct a subset of S as follows. We select yi if xi is assigned TRUE in the satisfying assignment, and zi if xi is assigned FALSE. If we add up what we have selected so far, we obtain a 1 in each of the first l digits because we have selected either yi or zi for each i. Furthermore, each of the last k digits is a number between 1 and 3 because each clause is satisfied and so contains between 1 and 3 true literals. We additionally select enough of the g and h numbers to bring each of the last k digits up to 3, thus hitting the target. Suppose that a subset of S sums to t. We construct a satisfying assignment to φ after making several observations. First, all the digits in members of S are either 0 or 1. Furthermore, each column in the table describing S contains at most five 1s. Hence a "carry" into the next column never occurs when a subset of S is added. To get a 1 in each of the first l columns, the subset must have either yi or zi for each i, but not both. Now we make the satisfying assignment. If the subset contains yi, we assign xi TRUE; otherwise, we assign it FALSE. This assignment must satisfy φ because in each of the final k columns, the sum is always 3. In column cj, at most 2 can come from gj and hj, so at least 1 in this column must come from some yi or zi in the subset. If it is yi, then xi appears in cj and is assigned TRUE, so cj is satisfied. If it is zi, then ¬xi appears in cj and xi is assigned FALSE, so cj is satisfied. Therefore, φ is satisfied. Finally, we must be sure that the reduction can be carried out in polynomial time. The table has a size of roughly (k + l)^2 and each entry can be easily calculated for any φ. So the total time is O(n^2) easy stages.

nondeterministic finite automaton

a 5-tuple (Q, Σ, δ, q0, F), where 1. Q is a finite set of states, 2. Σ is a finite alphabet, 3. δ : Q × Σε−→P(Q) is the transition function, 4. q0 ∈ Q is the start state, and 5. F ⊆ Q is the set of accept states.

HAMPATH is NP-complete

We previously demonstrated that HAMPATH is in NP, so all that remains to be done is to show 3SAT ≤P HAMPATH. For each 3cnf-formula φ, we show how to construct a directed graph G with two nodes, s and t, where a Hamiltonian path exists between s and t iff φ is satisfiable. We start the construction with a 3cnf-formula φ containing k clauses, φ = (a1 ∨ b1 ∨ c1) ∧ (a2 ∨ b2 ∨ c2) ∧ ··· ∧ (ak ∨ bk ∨ ck), where each a, b, and c is a literal xi or ¬xi. Let x1, ..., xl be the l variables of φ. Now we show how to convert φ to a graph G. The graph G that we construct has various parts to represent the variables and clauses that appear in φ. We represent each variable xi with a diamond-shaped structure that contains a horizontal row of nodes, as shown in the following figure. Later we specify the number of nodes that appear in the horizontal row. (construction in phone) Each clause cj is a single node; the diamonds for x1, ..., xl are stacked in sequence, with the topmost point being s and the bottommost being t. (construction in phone) Next, we show how to connect the diamonds representing the variables to the nodes representing the clauses. Each diamond structure contains a horizontal row of nodes connected by edges running in both directions. The horizontal row contains 3k + 1 nodes in addition to the two nodes on the ends belonging to the diamond. These nodes are grouped into adjacent pairs, one for each clause, with extra separator nodes next to the pairs, as shown in the following figure. (construction in phone) If xi appears in clause cj, we add two edges from the jth pair in the ith diamond to the jth clause node, as shown in Figure 7.52. After we add all the edges corresponding to each occurrence of xi or ¬xi in each clause, the construction of G is complete. To show that this construction works, we argue that if φ is satisfiable, a Hamiltonian path exists from s to t; and, conversely, if such a path exists, φ is satisfiable. Suppose that φ is satisfiable. To demonstrate a Hamiltonian path from s to t, we first ignore the clause nodes. The path begins at s, goes through each diamond in turn, and ends up at t. To hit the horizontal nodes in a diamond, the path either zig-zags from left to right or zag-zigs from right to left; the satisfying assignment to φ determines which. If xi is assigned TRUE, the path zig-zags through the corresponding diamond. If xi is assigned FALSE, the path zag-zigs. We show both possibilities in the following figure. (in phone...) Conversely, a Hamiltonian path must progress from s to t by passing through each diamond in turn, zig-zagging or zag-zigging (TRUE or FALSE, respectively) and detouring through the clause nodes along the way, until completion; any path that is not "normal" in this way cannot occur, so a Hamiltonian path yields a satisfying assignment. The reduction operates in polynomial time, and the proof is complete.

Why do we use polynomial time reductions instead of polynomial space reductions for PSPACE-complete problems?

Whenever we define complete problems for a complexity class, the reduction model must be more limited than the model used for defining the class itself. A polynomial space reduction could do all the work by itself, which would make every nontrivial language in PSPACE complete for the class and render the notion useless, so we use polynomial time reductions instead.

context free grammar

a 4-tuple (V, Σ, R, S), where 1. V is a finite set called the variables, 2. Σ is a finite set, disjoint from V , called the terminals, 3. R is a finite set of rules, with each rule being a variable and a string of variables and terminals, and 4. S ∈ V is the start variable

Finite Automata (FA)

a 5-tuple (Q, Σ, δ, q0, F), where 1. Q is a finite set called the states, 2. Σ is a finite set called the alphabet, 3. δ : Q × Σ−→Q is the transition function, 4. q0 ∈ Q is the start state, and 5. F ⊆ Q is the set of accept states.

Language A is polynomial time mapping reducible to language B, A ≤P B, if

a polynomial time computable function f : Σ∗→Σ∗ exists, where for every w, w ∈A ⇐⇒ f(w) ∈B. The function f is called the polynomial time reduction of A to B.

SUBSET-SUM is in NP

certificate: the subset. The following is a verifier V for SUBSET-SUM. V = "On input ⟨⟨S, t⟩, c⟩: 1. Test whether c is a collection of numbers that sum to t. 2. Test whether S contains all the numbers in c. 3. If both pass, accept; otherwise, reject." ALTERNATIVE PROOF We can also prove this theorem by giving a nondeterministic polynomial time Turing machine for SUBSET-SUM as follows. N = "On input ⟨S, t⟩: 1. Nondeterministically select a subset c of the numbers in S. 2. Test whether c is a collection of numbers that sum to t. 3. If the test passes, accept; otherwise, reject."
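A small Python sketch of the verifier V, treating S and the certificate c as multisets of integers (an assumption about the encoding):

from collections import Counter

def verify_subset_sum(S, t, c):
    if sum(c) != t:                 # stage 1: c must sum to t
        return False
    s_count, c_count = Counter(S), Counter(c)
    # stage 2: every number in c must appear in S often enough
    return all(c_count[x] <= s_count[x] for x in c_count)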

difference between in a class, class-hard, and class-complete

Class membership requires only that A be in the class. A language A is class-hard if every language in the class is polynomial time reducible to A; A itself need not be a member of the class. If A is both class-hard and in the class, A is class-complete.

Language A is mapping ______ to language B, written A ≤m B, if there is a computable function f : Σ∗−→Σ∗, where for every w, w ∈ A ⇐⇒ f(w) ∈ B. The function f is called the reduction from A to B.

mapping reducible (formal)

oracle tm

a modified TM that has the additional capability of querying an oracle. M^B denotes a TM with an oracle for language B

big o says a function is no

more than another, asymptotically

Why is Savitch's theorem important?

nondet TMs that use f(n) space (where f(n) ≥ n) can be converted to det TMs that use only f^2(n) space (yayy)

Which are more powerful: deterministic or nondeterministic PDAs?

nondeterministic pushdown automata are more powerful than their deterministic counterparts

the stack

provides additional memory beyond the finite amount available in the control. The stack allows pushdown automata to recognize some nonregular languages.

Every DPDA has an equivalent DPDA that always

reads the entire input string

Type 0

recursively enumerable languages; recognized by Turing machines. Rules are of the form α → β with α, β ∈ (Σ ∪ V)*; the derivation step y1αy2 ⇒_G y1βy2 is allowed iff α → β ∈ R. L(G) = {w ∈ Σ* | S ⇒*_G w}

ΣiTIME(f(n))

the class of languages that a Σi-alternating TM can decide in O(f(n)) time.

NL = NSPACE(log n)

the class of languages that are decidable in logarithmic space on a nondeterministic Turing machine.

PSPACE

the class of languages that are decidable in polynomial space on a deterministic Turing machine. In other words, PSPACE = ⋃_k SPACE(n^k)

P

the class of languages that are decidable in polynomial time on a deterministic single-tape Turing machine. In other words, P = ⋃_k TIME(n^k). 1. P is invariant for all models of computation that are polynomially equivalent to the deterministic single-tape Turing machine, and 2. P roughly corresponds to the class of problems that are realistically solvable on a computer.

NP

the class of languages that have polynomial time verifiers

Poly time hierarchy

the collection of classes ΣiP = ⋃_k ΣiTIME(n^k) and ΠiP = ⋃_k ΠiTIME(n^k). Define the class PH = ⋃_i ΣiP = ⋃_i ΠiP. Clearly, NP = Σ1P and coNP = Π1P. Additionally, MIN-FORMULA ∈ Π2P.

If A ≤m B and B is Turing-recognizable

then A is Turing-recognizable.

If A ≤m B and A is undecidable

then B is undecidable In Theorem 5.1 we used a reduction from ATM to prove that HALT TM is undecidable. This reduction showed how a decider for HALT TM could be used to give a decider for ATM. We can demonstrate a mapping reducibility from ATM to HALT TM as follows. To do so, we must present a computable function f that takes input of the form ⟨M,w⟩ and returns output of the form ⟨M′, w′⟩, where ⟨M,w⟩ ∈ ATM if and only if ⟨M′, w′⟩ ∈ HALT TM. The following machine F computes a reduction f. F = "On input ⟨M,w⟩: 1. Construct the following machine M′. M′ = "On input x: 1. Run M on x. 2. If M accepts, accept. 3. If M rejects, enter a loop." 2. Output ⟨M′, w⟩." A minor issue arises here concerning improperly formed input strings. If TM F determines that its input is not of the correct form as specified in the input line "On input ⟨M,w⟩:" and hence that the input is not in ATM, the TM outputs a string not in HALT TM. Any string not in HALT TM will do. In general, when we describe a Turing machine that computes a reduction from A to B, improperly formed inputs are assumed to map to strings outside of B.

TQBF

to determine whether a fully quantified Boolean formula is true or false. We define the language TQBF = {⟨φ⟩ | φ is a true fully quantified Boolean formula}.

Chomsky hierarchy

type 0 is the recursively enumerable languages (the largest class, containing types 1, 2, and 3); type 1 is the context-sensitive languages (containing types 2 and 3); type 2 is the context-free languages (containing type 3); and type 3 is the regular languages. Together these grammar classes describe different aspects of natural language, and they form a hierarchy.

ATM is (proof via the recursion theorem)

undecidable We assume that Turing machine H decides ATM, for the purpose of obtaining a contradiction. We construct the following machine B. B = "On input w: 1. Obtain, via the recursion theorem, own description ⟨B⟩. 2. Run H on input ⟨B, w⟩. 3. Do the opposite of what H says. That is, accept if H rejects and reject if H accepts." Running B on input w does the opposite of what H declares it does. Therefore, H cannot be deciding ATM. Done!

indistinguishable

For every string z, xz ∈ L iff yz ∈ L; we then say that x and y are indistinguishable by L.

NTIME(t(n))

{L|L is a language decided by an O(t(n)) time nondeterministic Turing machine}

co-NP

{complements of problems in NP}. Verify with a "no" answer. We don't know whether coNP is different from NP.

EQDFA

{⟨A, B⟩| A and B are DFAs and L(A) = L(B)}. decidable To prove this theorem, we use Theorem 4.4. We construct a new DFA C from A and B, where C accepts only those strings that are accepted by either A or B but not by both. Thus, if A and B recognize the same language, C will accept nothing. Writing comp(X) for the complement of a language X, the language of C is L(C) = (L(A) ∩ comp(L(B))) ∪ (comp(L(A)) ∩ L(B)). This expression is sometimes called the symmetric difference of L(A) and L(B). The symmetric difference is useful here because L(C) = ∅ iff L(A) = L(B). We can construct C from A and B with the constructions for proving the class of regular languages closed under complementation, union, and intersection. These constructions are algorithms that can be carried out by Turing machines. Once we have constructed C, we can use Theorem 4.4 to test whether L(C) is empty. If it is empty, L(A) and L(B) must be equal. F = "On input ⟨A, B⟩, where A and B are DFAs: 1. Construct DFA C as described. 2. Run TM T from Theorem 4.4 on input ⟨C⟩. 3. If T accepts, accept. If T rejects, reject."

ADFA =

{⟨B, w⟩| B is a DFA that accepts input string w}. decidable We simply need to present a TM M that decides ADFA. M = "On input ⟨B, w⟩, where B is a DFA and w is a string: 1. Simulate B on input w. 2. If the simulation ends in an accept state, accept. If it ends in a nonaccepting state, reject." We mention just a few implementation details of this proof. For those of you familiar with writing programs in any standard programming language, imagine how you would write a program to carry out the simulation. First, let's examine the input ⟨B, w⟩. It is a representation of a DFA B together with a string w. One reasonable representation of B is simply a list of its five components: Q, Σ, δ, q0, and F. When M receives its input, M first determines whether it properly represents a DFA B and a string w. If not, M rejects. Then M carries out the simulation directly. It keeps track of B's current state and B's current position in the input w by writing this information down on its tape. Initially, B's current state is q0 and B's current input position is the leftmost symbol of w. The states and position are updated according to the specified transition function δ. When M finishes processing the last symbol of w, M accepts the input if B is in an accepting state; M rejects the input if B is in a nonaccepting state.

ANFA =

{⟨B, w⟩| B is an NFA that accepts input string w} decidable N = "On input ⟨B, w⟩, where B is an NFA and w is a string: 1. Convert NFA B to an equivalent DFA C, using the procedure for this conversion given in Theorem 1.39. 2. Run TM M from Theorem 4.1 on input ⟨C, w⟩. 3. If M accepts, accept; otherwise, reject."

ACFG

{⟨G, w⟩| G is a CFG that generates string w}. decidable For CFG G and string w, we want to determine whether G generates w. One idea is to use G to go through all derivations to determine whether any is a derivation of w. This idea doesn't work, as infinitely many derivations may have to be tried. If G does not generate w, this algorithm would never halt. This idea gives a Turing machine that is a recognizer, but not a decider, for ACFG. To make this Turing machine into a decider, we need to ensure that the algorithm tries only finitely many derivations. In Problem 2.26 (page 157) we showed that if G were in Chomsky normal form, any derivation of w has 2n − 1 steps, where n is the length of w. In that case, checking only derivations with 2n − 1 steps to determine whether G generates w would be sufficient. Only finitely many such derivations exist. We can convert G to Chomsky normal form by using the procedure given in Section 2.1. PROOF The TM S for ACFG follows. S = "On input ⟨G, w⟩, where G is a CFG and w is a string: 1. Convert G to an equivalent grammar in Chomsky normal form. 2. List all derivations with 2n − 1 steps, where n is the length of w; except if n = 0, then instead list all derivations with one step. 3. If any of these derivations generate w, accept; if not, reject."

ECFG

{⟨G⟩| G is a CFG and L(G) = ∅}. decidable To find an algorithm for this problem, we might attempt to use TM S from Theorem 4.7. It states that we can test whether a CFG generates some particular string w. To determine whether L(G) = ∅, the algorithm might try going through all possible w's, one by one. But there are infinitely many w's to try, so this method could end up running forever. We need to take a different approach. In order to determine whether the language of a grammar is empty, we need to test whether the start variable can generate a string of terminals. The algorithm does so by solving a more general problem. It determines for each variable whether that variable is capable of generating a string of terminals. When the algorithm has determined that a variable can generate some string of terminals, the algorithm keeps track of this information by placing a mark on that variable. First, the algorithm marks all the terminal symbols in the grammar. Then, it scans all the rules of the grammar. If it ever finds a rule that permits some variable to be replaced by some string of symbols, all of which are already marked, the algorithm knows that this variable can be marked, too. The algorithm continues in this way until it cannot mark any additional variables. The TM R implements this algorithm. PROOF R = "On input ⟨G⟩, where G is a CFG: 1. Mark all terminal symbols in G. 2. Repeat until no new variables get marked: 3. Mark any variable A where G has a rule A → U1U2 ···Uk and each symbol U1, ..., Uk has already been marked. 4. If the start variable is not marked, accept; otherwise, reject."

ATM

{⟨M,w⟩| M is a TM and M accepts w}. Undecidable Before we get to the proof, let's first observe that ATM is Turing-recognizable. Thus, this theorem shows that recognizers are more powerful than deciders. Requiring a TM to halt on all inputs restricts the kinds of languages that it can recognize. The following Turing machine U recognizes ATM. U = "On input ⟨M,w⟩, where M is a TM and w is a string: 1. Simulate M on input w. 2. If M ever enters its accept state, accept; if M ever enters its reject state, reject." We assume that ATM is decidable and obtain a contradiction. Suppose that H is a decider for ATM. On input ⟨M,w⟩, where M is a TM and w is a string, H halts and accepts if M accepts w. Furthermore, H halts and rejects if M fails to accept w. In other words, we assume that H is a TM where H(⟨M,w⟩) = accept if M accepts w, and reject if M does not accept w. Now we construct a new Turing machine D with H as a subroutine. This new TM calls H to determine what M does when the input to M is its own description ⟨M⟩. Once D has determined this information, it does the opposite. That is, it rejects if M accepts and accepts if M does not accept. The following is a description of D. D = "On input ⟨M⟩, where M is a TM: 1. Run H on input ⟨M,⟨M⟩⟩. 2. Output the opposite of what H outputs. That is, if H accepts, reject; and if H rejects, accept." Don't be confused by the notion of running a machine on its own description! That is similar to running a program with itself as input, something that does occasionally occur in practice. For example, a compiler is a program that translates other programs. A compiler for the language Python may itself be written in Python, so running that program on itself would make sense. In summary, D(⟨M⟩) = accept if M does not accept ⟨M⟩, and reject if M accepts ⟨M⟩. What happens when we run D with its own description ⟨D⟩ as input? In that case, we get D(⟨D⟩) = accept if D does not accept ⟨D⟩, and reject if D accepts ⟨D⟩. No matter what D does, it is forced to do the opposite, which is obviously a contradiction. Thus, neither TM D nor TM H can exist.

EQTM

{⟨M1, M2⟩| M1 and M2 are TMs and L(M1) = L(M2)}. undecidable We let TM R decide EQTM and construct TM S to decide ETM as follows. S = "On input ⟨M⟩, where M is a TM: 1. Run R on input ⟨M,M1⟩, where M1 is a TM that rejects all inputs. 2. If R accepts, accept; if R rejects, reject." If R decides EQTM, S decides ETM. But ETM is undecidable by Theorem 5.2, so EQTM also must be undecidable.

REGULARTM

{⟨M⟩| M is a TM and L(M) is a regular language}. undecidable We let R be a TM that decides REGULARTM and construct TM S to decide ATM. Then S works in the following manner. S = "On input ⟨M,w⟩, where M is a TM and w is a string: 1. Construct the following TM M2. M2 = "On input x: 1. If x has the form 0^n1^n, accept. 2. If x does not have this form, run M on input w and accept if M accepts w." 2. Run R on input ⟨M2⟩. 3. If R accepts, accept; if R rejects, reject."

ELBA

{⟨M⟩| M is an LBA where L(M) = ∅}. undecidable Now we are ready to state the reduction of ATM to ELBA. Suppose that TM R decides ELBA. Construct TM S to decide ATM as follows. S = "On input ⟨M,w⟩, where M is a TM and w is a string: 1. Construct LBA B from M and w as described in the proof idea. 2. Run R on input ⟨B⟩. 3. If R rejects, accept; if R accepts, reject." If R accepts ⟨B⟩, then L(B) = ∅. Thus, M has no accepting computation history on w and M doesn't accept w. Consequently, S rejects ⟨M,w⟩. Similarly, if R rejects ⟨B⟩, the language of B is nonempty. The only string that B can accept is an accepting computation history for M on w. Thus, M must accept w. Consequently, S accepts ⟨M,w⟩. Figure 5.12 illustrates LBA B.

AREX =

{⟨R, w⟩| R is a regular expression that generates string w} decidable P = "On input ⟨R, w⟩, where R is a regular expression and w is a string: 1. Convert regular expression R to an equivalent NFA A by using the procedure for this conversion given in Theorem 1.54. 2. Run TM N on input ⟨A, w⟩. 3. If N accepts, accept; if N rejects, reject."

