ToC


Given two sets A and B, a function f:A->B is onto if:

(∀ b ∈ B)(∃ a ∈ A)[f(a) = b]: for every b in B there exists an a in A such that f(a) = b

How to convert a CFG into Chomsky Normal Form:

1. Add a new start variable S0 and the rule S0 -> S, where S was the original start variable.
2. Remove all ɛ-rules. To remove an ɛ-rule A -> ɛ, where A is not the start variable, for each occurrence of A on the right-hand side of a rule, add a new rule with that occurrence deleted.
3. Remove all unit rules. To remove a unit rule A -> B, whenever a rule B -> u appears, add the rule A -> u unless this was a unit rule previously removed. As before, u is a string of variables and terminals.
4. Convert all remaining rules into the proper form. Replace each rule A -> u1 u2 ... uk, where k >= 3 and each ui is a variable or terminal symbol, with the rules A -> u1A1, A1 -> u2A2, A2 -> u3A3, ..., Ak-2 -> uk-1 uk. The Ai's are new variables. In any rule with k >= 2, replace each terminal ui with a new variable Ui and add the rule Ui -> ui.
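The binarization in step 4 can be sketched in Python (a hypothetical helper; the rule encoding and the generated variable names A1, A2, ... are illustrative):

```python
def binarize(head, body, fresh):
    """Split rule head -> body (a list of k >= 3 symbols) into binary rules."""
    rules = []
    prev = head
    for i in range(len(body) - 2):
        new_var = next(fresh)          # fresh variable: A1, A2, ...
        rules.append((prev, [body[i], new_var]))
        prev = new_var
    rules.append((prev, body[-2:]))    # last rule keeps the final two symbols
    return rules

def fresh_names(prefix="A"):
    i = 1
    while True:
        yield f"{prefix}{i}"
        i += 1
```

For example, A -> u1 u2 u3 u4 becomes A -> u1 A1, A1 -> u2 A2, A2 -> u3 u4.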

What are the two ways we can prove a language is context-free?

1. Give a CFG generating the language 2. Give a PDA recognizing the language

What is the process of CONVERT(G)?

1. Let k be the number of states of G.
2. If k = 2, then G has just a start state and an accept state, with a single arrow connecting them. Return the label of that arrow as the regular expression R that we want.
3. If k > 2, select any qrip ∈ Q different from qstart and qaccept, and let G' be the GNFA (Q', Σ, δ', qstart, qaccept), where Q' = Q - {qrip}, and for any qi ∈ Q' - {qaccept} and any qj ∈ Q' - {qstart} let δ'(qi, qj) = (R1)(R2)*(R3) ∪ (R4), for R1 = δ(qi, qrip), R2 = δ(qrip, qrip), R3 = δ(qrip, qj), and R4 = δ(qi, qj).
4. Compute CONVERT(G') and return this value.

How to convert a DFA to a CFG:

1. Make a variable Ri for each state qi of the DFA.
2. Add the rule Ri -> aRj if δ(qi, a) = qj is a transition in the DFA.
3. Add the rule Ri -> ɛ if qi is an accept state of the DFA.
4. Make R0 the start variable of the grammar (corresponding to q0, the start state of the machine).
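The construction is mechanical enough to sketch directly; the DFA used here (binary strings with an even number of 1s) is a hypothetical example:

```python
def dfa_to_cfg(states, alphabet, delta, start, accept):
    rules = []
    for q in states:
        for a in alphabet:
            rules.append((f"R{q}", [a, f"R{delta[(q, a)]}"]))  # Ri -> a Rj
        if q in accept:
            rules.append((f"R{q}", []))                        # Ri -> eps
    return f"R{start}", rules

start_var, rules = dfa_to_cfg(
    states={0, 1}, alphabet={"0", "1"},
    delta={(0, "0"): 0, (0, "1"): 1, (1, "0"): 1, (1, "1"): 0},
    start=0, accept={0})
```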

What is the informal description of a PDA P?

1. Place the marker symbol $ and the start variable on the stack.
2. Repeat the following steps forever.
2a. If the top of the stack is a variable symbol A, nondeterministically select one of the rules for A and substitute A with the string on the right-hand side of the rule.
2b. If the top of the stack is a terminal symbol a, read the next symbol from the input and compare it to a. If they match, continue. Otherwise, reject this branch of the nondeterminism.
2c. If the top of the stack is the symbol $, enter the accept state. Doing so accepts the input if it has all been read.
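Assuming we may cap the number of explored configurations (a real PDA has no such bound; max_steps is an illustrative cutoff), the procedure above can be sketched as a breadth-first search over (stack, input position) configurations:

```python
from collections import deque

def cfg_pda_accepts(rules, start, w, max_steps=10000):
    # A configuration is (stack, i): stack is a tuple with the top first,
    # "$" marks the bottom, and i is the next input position to read.
    init = ((start, "$"), 0)
    queue, seen, steps = deque([init]), {init}, 0
    while queue and steps < max_steps:
        steps += 1
        stack, i = queue.popleft()
        top, rest = stack[0], stack[1:]
        if top == "$":
            if i == len(w):
                return True            # step 2c: bottom marker, all input read
            continue
        if top in rules:               # step 2a: replace variable by a rule body
            for body in rules[top]:
                nxt = (tuple(body) + rest, i)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        elif i < len(w) and w[i] == top:   # step 2b: match a terminal
            nxt = (rest, i + 1)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
        # mismatched terminal: this branch of nondeterminism is rejected
    return False

# Hypothetical grammar for { 0^n 1^n | n >= 0 }: S -> 0S1 | eps
rules = {"S": [["0", "S", "1"], []]}
```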

F1.61: GNFAs have a special form that meets the following three conditions:

1. The start state has transition arrows going to every other state but no arrows coming in from any other state.
2. There is only a single accept state, and it has arrows coming in from every other state but no arrows going to any other state. Furthermore, the accept state is not the same as the start state.
3. Except for the start and accept states, one arrow goes from every state to every other state and also from each state to itself.

Describe the process of derivations (generating strings):

1. Write down the start variable. 2. Find a variable that is written down and a rule that starts with that variable. Replace the variable with the right-hand side of that rule. 3. Repeat step 2 until no variables remain.

The configuration of a Turing machine shows you:

1. the current state 2. current tape contents 3. current head location

D2.8: Chomsky Normal Form

A CFG is in Chomsky Normal Form if every rule is of the form A -> BC or A -> a, where a is any terminal and A, B, and C are any variables, except that B and C may not be the start variable. In addition, the rule S -> ɛ is permitted, where S is the start variable.

D3.3: What is the formal definition of a Turing machine?

A Turing machine is a 7-tuple (Q, ∑, Γ, δ, q0, qaccept, qreject), where Q, ∑, Γ are all finite sets and
1. Q is the set of states,
2. ∑ is the input alphabet not containing the blank symbol _,
3. Γ is the tape alphabet, where _ ∈ Γ and ∑ ⊆ Γ,
4. δ: Q x Γ -> Q x Γ x {L, R} is the transition function,
5. q0 ∈ Q is the start state,
6. qaccept ∈ Q is the accept state, and
7. qreject ∈ Q is the reject state, where qreject != qaccept.

What is an enumerator?

A Turing machine with an attached printer; the TM uses the printer as an output device to print strings, and the language it enumerates is the collection of strings it eventually prints out.

D1.16: What is a regular language?

A language is called a regular language if some finite automaton recognizes it.

D2.13: What is the formal definition of a PDA?

A pushdown automaton is as a 6-tuple (Q, Σ, Γ, δ, q0, F), where Q, Σ, Γ, and F are all finite sets, and 1. Q is the set of states, 2. Σ is the input alphabet, 3. Γ is the stack alphabet, 4. δ: Q x Σɛ x Γɛ -> P(Q x Γɛ) is the transition function, 5. q0 ∈ Q is the start state, and 6. F ⊆ Q is the set of accept states.

T1.70: The pumping lemma for regular languages:

All regular languages have a special property. If a language does not have this property, then it is not regular. Property: if A is regular, there is a number p (the pumping length) such that any string s ∈ A with |s| >= p can be divided into three pieces s = xyz satisfying: 1. for each i >= 0, xy^iz ∈ A, 2. |y| > 0, and 3. |xy| <= p. In other words, each such string contains a section that can be repeated any number of times with the resulting string remaining in the language. Note: 1. x or z may be ɛ, but y cannot (condition 2). 2. The theorem would be trivially true without condition 2. 3. Condition 3 is an extra technical condition that is sometimes useful.

How do we use the pumping lemma in a proof?

Assume a language is regular with pumping length p such that all strings of length p or greater can be pumped. Find a string s ∈ A such that |s| >= p but s cannot be pumped no matter how we split s into xyz.
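As a sketch of this proof pattern for the standard example A = { 0^n 1^n | n >= 0 }, the code below (function names are illustrative) checks every split s = xyz with |xy| <= p and |y| > 0 of s = 0^p 1^p, and confirms that no split survives pumping:

```python
def in_A(w):
    """Membership in A = { 0^n 1^n }."""
    n = len(w) // 2
    return w == "0" * n + "1" * n

def pumpable(s, p):
    """Return True if some legal split xyz keeps xy^iz in A for i = 0, 1, 2."""
    for xy_len in range(1, p + 1):
        for y_len in range(1, xy_len + 1):
            x = s[:xy_len - y_len]
            y = s[xy_len - y_len:xy_len]
            z = s[xy_len:]
            if all(in_A(x + y * i + z) for i in (0, 1, 2)):
                return True     # this split survives pumping
    return False                # no split works: s witnesses non-regularity

p = 5
s = "0" * p + "1" * p
```

Since |xy| <= p forces y to consist only of 0s, pumping y unbalances the string, so `pumpable(s, p)` is False and A is not regular.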

What are (nondeterministic) PDAs equivalent in power to?

CFGs Note: We are focusing on nondeterministic PDAs because they can recognize languages that no deterministic PDA can.

What makes context-free grammars different from finite automata and regular expressions?

CFGs are a more powerful method of describing languages. Such grammars can describe certain features that have a recursive structure.

DFA vs. NFA

DFA: Deterministic; when the machine is in a given state and reads the next input symbol, we know what the next state will be. NFA: Nondeterministic; several choices may exist for the next state at any point. Every state of a DFA always has exactly one exiting transition arrow for each symbol in the alphabet. Every state of an NFA may have zero or more exiting transition arrows for each symbol in the alphabet, as well as for ɛ.

T2.3: What is the pumping lemma for context-free languages?

If A is a CFL, then there is a number p (the pumping length) where, if s is any string in A of length at least p, then s may be divided into five pieces s = uvxyz satisfying the conditions: 1. for each i >= 0, uv^ixy^iz ∈ A, 2. |vy| > 0 (either v or y is not ɛ), and 3. |vxy| <= p

C5.23: If A <=m B and A is undecidable, then B is undecidable. What is the proof for this?

In Theorem 5.1 we reduced ATM to HALTTM. We need a computable function f that maps <M, w> to <M', w'> such that <M, w> ∈ ATM iff <M', w'> ∈ HALTTM. F = "On input <M, w>: 1. Construct the TM M': M' = "On input x: 1. Run M on x. 2. If M accepts, accept. 3. If M rejects, enter a loop." 2. Output <M', w>."

Problem A being reducible to Problem B implies:

B is at least as hard as A: a solution to B provides a solution to A, so A cannot be harder than B. Consequently, if A is "hard" (e.g., undecidable) and A is reducible to B, then B is also "hard."

T4.22: A language is decidable iff ... . What is the proof for it?

L is Turing-recognizable and co-Turing-recognizable. Proof Idea: If L is decidable, then L and the complement of L are both Turing-recognizable (the complement of a decidable language is also decidable). Conversely, if both L and co-L are Turing-recognizable, let M1 recognize L and M2 recognize co-L, and consider the following TM M: M = "On input w: 1. Run both M1 and M2 on input w in parallel. 2. If M1 accepts, accept; if M2 accepts, reject." For every string w ∈ ∑*, we have either w ∈ L or w ∈ co-L, so either M1 or M2 accepts w. Therefore M always halts with an answer, so M is a decider for L, and L is decidable.

How does a PDA compute?

Let M = (Q, Σ, Γ, δ, q0, F) be a PDA. M accepts input w if w can be written as w = w1 w2 ... wm, where each wi ∈ Σɛ, and a sequence of states r0, r1, ..., rm ∈ Q and strings s0, s1, ..., sm ∈ Γ* exist that satisfy the following conditions: 1. r0 = q0 and s0 = ɛ. This condition signifies that M starts properly, at q0 with an empty stack. 2. For each i = 0, ..., m-1, we have (ri+1, b) ∈ δ(ri, wi+1, a), where si = at and si+1 = bt for some a, b ∈ Γɛ and t ∈ Γ*. This condition states that M moves properly according to the state, stack, and next input symbol. 3. rm ∈ F. This condition states that an accept state occurs at the input end.

What is the formal definition of computation for a DFA?

Let M = (Q, Σ, δ, q0, F) be a DFA and w = w1 w2 . . .wn be a string over Σ. Then M accepts w if a sequence of states r0, r1, ..., rn exist in Q such that the following hold: 1. r0 = q0 2. δ(ri, wi+1) = ri+1 for i = 0, 1, . . . , n − 1 3. rn ∈ F Condition #1 says that the machine starts in the start state. Condition #2 says that the machine goes from state to state according to the transition function. Condition #3 says that the machine accepts its input if it ends up in an accept state. We say that M recognizes the language A if A = { w | M accepts w }
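The three conditions translate directly into a short simulator; the example DFA (strings over {0,1} ending in 1) is hypothetical:

```python
def dfa_accepts(delta, q0, F, w):
    r = q0                       # condition 1: start in the start state
    for symbol in w:
        r = delta[(r, symbol)]   # condition 2: move by the transition function
    return r in F                # condition 3: end in an accept state

# Hypothetical DFA: state "b" means "last symbol read was 1"
delta = {("a", "0"): "a", ("a", "1"): "b",
         ("b", "0"): "a", ("b", "1"): "b"}
```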

What is the formal definition of computation for an NFA?

Let N = (Q, Σ, δ, q0, F) be an NFA and w = y1 y2 . . .ym be a string over Σ. Then N accepts w if a sequence of states r0, r1, ..., rm exist in Q such that the following hold: 1. r0 = q0 2. ri+1 ∈ δ(ri, yi+1) for i = 0, 1, . . . , m − 1 3. rm ∈ F Condition #1 says that the machine starts in the start state. Condition #2 says that the state ri+1 is one of the allowable next states when N is in state ri and reading yi+1. Observe that δ(ri, yi+1) is the set of allowable next states and so we say that ri+1 is a member of that set. Condition #3 says that the machine accepts its input if the last state is an accept state. We say that N recognizes the language A if A = { w | N accepts w }
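The same idea for NFAs tracks a set of possible current states. For brevity this sketch omits ɛ-moves (the full version would apply the ɛ-closure after each step); the example NFA, accepting strings containing "01", is hypothetical:

```python
def nfa_accepts(delta, q0, F, w):
    current = {q0}
    for symbol in w:
        # all states reachable from any current state on this symbol
        current = {q for r in current for q in delta.get((r, symbol), set())}
    return bool(current & F)     # accept if any thread ends in an accept state

# Hypothetical NFA over {0,1} accepting strings containing "01"
delta = {(0, "0"): {0, 1}, (0, "1"): {0},
         (1, "1"): {2},
         (2, "0"): {2}, (2, "1"): {2}}
```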

What is the proof idea for a pumping lemma for CFLs?

Let s ∈ A, so that s is "sufficiently long." Since s ∈ A, s has a parse tree because it is derivable from G. The idea is that the parse tree is tall because s is long. So the tree must have some long path from the root to some terminal symbol (some leaf). On this long path, some variable R must be repeating by the pigeonhole principle. This repetition allows us to replace the subtree under the second occurrence of R and still get a legal parse tree. Therefore, we may cut s into five pieces uvxyz, and we may repeat the second and fourth pieces and obtain a string still in the language. In other words, uv^ixy^iz is in A for any i >= 0.

D2.2: What is the formal definition of a context-free grammar?

A context-free grammar is a 4-tuple (V, Σ, R, S), where 1. V is a finite set called the variables, 2. Σ is a finite set, disjoint from V, called the terminals, 3. R is a finite set of rules, each rule being a variable and a string of variables and terminals, and 4. S ∈ V is the start variable. Note: Languages of CFGs are called context-free languages. Pushdown automata (PDAs) recognize CFLs.

D1.37: What is the formal definition of an NFA?

A nondeterministic finite automaton is a 5-tuple (Q, Σ, δ, q0, F), where 1. Q is a finite set of states, 2. Σ is a finite alphabet, 3. δ: Q x Σɛ -> P(Q) is the transition function, 4. q0 ∈ Q is the start state, and 5. F ⊆ Q is the set of accept states. Note: P(Q) is the power set of Q.

T1.49: The class of regular languages is __________ under the star operation. What is the proof idea for it?

closed Proof Idea: Take the NFA N1 recognizing A1. Add a new start state, which is also an accept state and has an ɛ arrow to the old start state; this adds ɛ to the language without adding anything else. Also add ɛ arrows from the old accept states back to the old start state, so the machine can loop and accept any concatenation of strings from A1.

T1.39: Every NFA has an equivalent DFA. What is the proof idea?

Proof Idea: Convert the NFA into a DFA. Let M = (Q, Σ, δ, q0, F) be an NFA with k states and M' = (Q', Σ, δ', q0', F') be the equivalent DFA with 2^k states, where: Q' = P(Q) q0' = E({ q0 }) F' = { R ∈ Q' | R contains an accept state from the NFA } δ'(R, a) = { q ∈ Q | q ∈ E(δ(r, a)) for some r ∈ R } Note: We also need to define the epsilon-closure: E(R) = { q ∈ Q | q can be reached from a state in R by following 0 or more ɛ-edges }.
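A sketch of the subset construction including the ɛ-closure E. It builds only the reachable subsets rather than all 2^k of them, a common practical shortcut; the dictionary-based NFA encoding is illustrative:

```python
def eclose(states, eps):
    """E(R): all states reachable from R by 0 or more eps-edges."""
    stack, seen = list(states), set(states)
    while stack:
        q = stack.pop()
        for r in eps.get(q, set()):
            if r not in seen:
                seen.add(r)
                stack.append(r)
    return frozenset(seen)

def subset_construction(alphabet, delta, eps, q0, F):
    start = eclose({q0}, eps)
    dfa_delta, states, todo = {}, {start}, [start]
    while todo:
        R = todo.pop()
        for a in alphabet:
            # move on a from every NFA state in R, then close under eps
            S = eclose({q for r in R for q in delta.get((r, a), set())}, eps)
            dfa_delta[(R, a)] = S
            if S not in states:
                states.add(S)
                todo.append(S)
    accept = {R for R in states if R & F}
    return states, dfa_delta, start, accept

# Hypothetical NFA: eps-edge 0 -> 1, and 1 --a--> 2, accepting {2}
states, dfa_delta, start, accept = subset_construction(
    {"a"}, {(1, "a"): {2}}, {0: {1}}, 0, {2})
```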

What languages can PDAs recognize?

All regular languages and some nonregular languages; the class of languages PDAs recognize is exactly the context-free languages.

What makes a Turing machine different from finite automata?

TMs: 1. Uses an infinite/unlimited tape as its memory. 2. Has accept and reject states, but the machine may go on forever, never halting. 3. A TM can read and write on its tape. 4. The read/write tape head can move both left and right. 5. The special states accept and reject take effect immediately.

What is the objective of studying decidability and unsolvability?

To explore the limits of algorithmic solvability

T5.28: If A <=m B and B is Turing-recognizable, then A is __________ __________. What is the proof for this?

Turing-recognizable Proof Idea: Same as in Theorem 5.22 for deciders

C5.29: If A <=m B and A is not Turing-recognizable, then B is not __________ __________. What is an application of this?

Turing-recognizable Typical application: A mapping reduction f: A <=m B is also a mapping reduction f: co-A <=m co-B. So, since we know that co-ATM is not Turing-recognizable, we can show that a problem P is not Turing-recognizable by designing a mapping reduction f: ATM <=m co-P. (This will also imply co-ATM <=m P).

T5.30: EQTM = {<M1, M2> | M1 and M2 are TMs and L(M1) = L(M2)} is neither __________ __________ nor __________ __________. What is the proof for this?

Turing-recognizable co-Turing recognizable Proof Idea: First, EQTM is not Turing-recognizable: ATM reduces to co-EQTM. F = "On input <M, w>, where M is a TM and w is a string: 1. Construct the following two machines: M1 = "On any input, reject." M2 = "On any input, run M on w. If M accepts, accept." 2. Output <M1, M2>." M1 accepts nothing. If M accepts w, then L(M2) = ∑* != ∅ = L(M1); if M does not accept w, then L(M2) = ∅ = L(M1). So M1 and M2 are equivalent exactly in the second case, and F reduces ATM to co-EQTM. Second, co-EQTM is not Turing-recognizable: ATM reduces to EQTM. G = "On input <M, w>, where M is a TM and w is a string: 1. Construct the following two machines: M1 = "On any input, accept." M2 = "On any input, run M on w. If M accepts, accept." 2. Output <M1, M2>." M1 accepts everything. This time, if M accepts w then L(M2) = L(M1), and if M does not accept w then L(M2) != L(M1). So M1 and M2 are equivalent exactly in the first case, and G reduces ATM to EQTM.

D1.23: Let A and B be languages. What are the regular operations of union, concatenation, and star?

Union: A ∪ B = { x | x ∈ A or x ∈ B } Concatenation: A ο B = { xy | x ∈ A and y ∈ B } Star: A* = { x1 x2 ... xk | k ≥ 0 and each xi ∈ A }
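On small finite samples the three operations are one-liners with Python sets; star is shown truncated to k <= 2, since A* is infinite (the sample languages are illustrative):

```python
A, B = {"a", "ab"}, {"b"}

union = A | B                                    # A ∪ B
concat = {x + y for x in A for y in B}           # A ο B
star_up_to_2 = {""} | A | {x + y for x in A for y in A}   # A* truncated at k = 2
```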

What is a countable set?

a finite set or a set that has the same cardinality as N

Nondeterminism

a parallel computation with multiple independent processes running in parallel; looks like a tree of possibilities

T1.25: The class of regular languages is __________ under the union operation. What is the proof for it?

closed Proof Idea: Construct M from M1 (which recognizes A1) and M2 (which recognizes A2). Simulate both M1 and M2 simultaneously and accept if either simulation accepts. Note that we can't "rewind the tape," so the two simulations must run in parallel.

T1.26: The class of regular languages is __________ under the concatenation operation. What is the proof for it?

closed Proof Idea: We need an NFA because the machine doesn't know where to break the given string w into two pieces, w1 (for M1) and w2 (for M2).

D5.17: A function f: ∑* -> ∑* is a __________ function if some TM M, on every input w, halts with just f(w) on its tape

computable

C2.32: Every regular language is also a __________ language.

context-free

L2.27: If a PDA recognizes some language, then it is __________. What is the proof idea?

context-free Proof Idea: We have a PDA P, and we want to make a CFG G that generates all the strings that P accepts. In other words, G should generate a string if that string causes the PDA to go from its start state to an accept state. To achieve this outcome we design a grammar that does somewhat more. For each pair of states p and q in P, the grammar will have a variable Apq. This variable generates all the strings that can take P from p with an empty stack to q with an empty stack. Observe that such strings can also take P from p to q, regardless of the stack contents at p, leaving the stack at q in the same condition as it was at p. First, modify P slightly so that it: 1. has a single accept state, qaccept; 2. empties its stack before accepting; and 3. in each transition either pushes a symbol onto the stack (a push move) or pops one off the stack (a pop move), but does not do both at the same time. We do this by replacing each transition that simultaneously pops and pushes with a two-transition sequence that goes through a new state, and replacing each transition that neither pops nor pushes with a two-transition sequence that pushes then pops an arbitrary stack symbol.

How do you demonstrate equal cardinality of infinite sets?

create a bijection/correspondence between the two sets
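For example, the classic correspondence f: N -> Z showing that Z is countable (0, 1, 2, 3, 4, ... pairs with 0, 1, -1, 2, -2, ...) can be written as:

```python
def f(n):
    """Bijection from N = {0, 1, 2, ...} onto Z."""
    return (n + 1) // 2 if n % 2 else -(n // 2)
```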

T4.5: EQDFA = {<A, B> | A and B are DFAs and L(A) = L(B)} is a __________ language. What is the proof for it?

decidable Proof Idea: Construct a DFA C that accepts the symmetric difference L(C) = L(A) △ L(B) = (L(A) - L(B)) ∪ (L(B) - L(A)). Now use Theorem 4.4 to check whether L(C) is empty; L(A) = L(B) iff it is.
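The symmetric-difference idea, shown on finite sample sets (Python's set difference stands in for the product-DFA construction):

```python
def sym_diff(LA, LB):
    """Symmetric difference: strings in exactly one of the two languages."""
    return (LA - LB) | (LB - LA)

LA = {"0", "00", "000"}
LB = {"0", "00", "000"}
# The languages are equal exactly when the symmetric difference is empty.
```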

T4.4: EDFA = {<A> | A is a DFA and L(A) = ∅} is a __________ language. What is the proof for it?

decidable Proof Idea: Create a TM T that propagates marking labels. T = "On input <A> where A is a DFA: 1. Mark the start state of A. 2. Repeat until no new states get marked: 3. Mark any state that has a transition coming into it from any state that is already marked. 4. If no accept state is marked, accept; otherwise, reject."
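The marking procedure is a plain reachability search; here the DFA is given as a transition dictionary (a hypothetical encoding):

```python
def dfa_language_empty(delta, q0, accept):
    """L(A) is empty iff no accept state is reachable from the start state."""
    marked, stack = {q0}, [q0]
    while stack:
        q = stack.pop()
        for (p, _a), r in delta.items():
            if p == q and r not in marked:
                marked.add(r)        # mark states reachable from marked states
                stack.append(r)
    return not (marked & accept)

delta = {(0, "a"): 1, (1, "b"): 1}   # hypothetical example; state 2 unreachable
```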

T4.8: ECFG is a __________ language. What is the proof for it?

decidable Proof Idea: For each variable, determine whether it can generate some string of terminals. Start by marking the terminals. Then mark variables that can yield marked symbols: if A -> U1 U2 ... Uk is a rule of G, mark A once each of U1, U2, ..., Uk has been marked. Repeat until nothing new gets marked. In the end, if the start variable is not marked, accept; otherwise, reject.
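The same fixed-point marking in Python, with rules given as (head, body) pairs (a hypothetical encoding; a body of [] encodes an ɛ-rule):

```python
def cfg_language_empty(rules, terminals, start):
    """L(G) is empty iff the start variable never gets marked as generating."""
    marked = set(terminals)          # terminals start out marked
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            # mark head once every symbol on the right-hand side is marked
            if head not in marked and all(s in marked for s in body):
                marked.add(head)
                changed = True
    return start not in marked
```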

T5.22: If A <=m B and B is __________, then A is decidable. What is the proof for this?

decidable Proof Idea: Let M be a decider for B. Let f be the reduction from A to B. Construct a decider N for A as follows. N = "On input w: 1. Compute f(w) 2. Run M on input f(w) and output whatever M outputs." w ∈ A => f(w) ∈ B => M accepts f(w) whenever w ∈ A => N works as expected

T4.9: Every CFL is __________. What is the proof idea?

decidable Proof Idea: Use Theorem 4.7. In other words, let G be a CFG for language A. Then, on input w, use a TM MG that feeds <G, w> into the TM S of Theorem 4.7 and returns whatever S returns. Note: Simulating a nondeterministic PDA directly is a bad idea because some branches of the computation may go on forever, so we would not have a decider. Instead, first convert the PDA into a CFG G.

T4.3: AREX = {<R, w> | R is a regular expression that generates string w} is a __________ language. What is the proof for it?

decidable Proof Idea: Convert R to an NFA A using the conversion from Theorem 1.54. Now use the TM N from the previous theorem and return the answer obtained on input <A, w>.

T4.2: ANFA = {<B, w> | B is an NFA that accepts input string w} is a __________ language. What is the proof for it?

decidable Proof Idea: Use the powerset construction to transform the given (assumed valid) NFA B into an equivalent DFA C. Then run the TM M from the previous theorem on input <C, w> and return the answer that we obtain from M.

T4.1: ADFA = {<B, w> | B is a DFA that accepts input string w} is a __________ language. What is the proof for it?

decidable Proof Idea: We need a TM M that decides ADFA. M = "On input <B, w>, where B is a DFA and w is a string: 1. Simulate B on input w. 2. If the simulation ends in an accept state, accept. If it ends in a non-accepting state, reject."

T4.7: ACFG = {<G, w> | G is a CFG that generates string w} is a __________ language. What is the proof for it?

decidable Proof Idea: It does not work to try all possible derivations, because there are infinitely many. S = "On input <G, w>, where G is a CFG and w is a string: 1. Convert G to an equivalent grammar in Chomsky normal form. 2. List all derivations with 2n - 1 steps, where n = |w| (in a CNF grammar, any derivation of a string of length n takes exactly 2n - 1 steps); if n = 0, list all derivations with 1 step instead. 3. If any of these derivations generates w, accept; otherwise, reject."

D3.6: A language is Turing-decidable if some Turing machine __________ it.

decides

T3.16: Every nondeterministic TM has an equivalent __________TM. What is the proof idea?

deterministic Proof Idea: Do a breadth-first search (BFS) on the configuration tree that is produced when looking at all possible branches of computation. Use a multi-tape TM with three tapes (input tape, simulation tape, address tape); the address tape's sequences of numbers indicate which choice to make at each branch point so that we reach a particular configuration in the computation.

T3.21: A language is Turing-recognizable iff some enumerator __________ it. What is the proof for it?

enumerates Proof Idea: Two directions. (From a TM M, build an enumerator E:) Let s1, s2, s3, ... be a list of all strings in ∑*. E = "Ignore the input. For i = 1, 2, 3, ...: run M for i steps on each input s1, s2, ..., si; if any computation accepts, print out the corresponding si." (From an enumerator E, build a TM M:) M = "On input w: run E; compare each string E outputs with w; if w ever appears in the output of E, accept."
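The enumerator direction can be sketched with a step-bounded acceptance test standing in for "run M for i steps" (accepts_within is a hypothetical stand-in for that bounded run; the string ordering is the standard length-then-lexicographic one):

```python
from itertools import count, product

def all_strings(alphabet):
    """Yield s1, s2, s3, ...: all strings over the alphabet, shortest first."""
    for n in count(0):
        for tup in product(alphabet, repeat=n):
            yield "".join(tup)

def enumerate_language(accepts_within, alphabet, stages):
    """Dovetail: at stage i, run the bounded test for i steps on s1..si."""
    printed = []
    for i in range(1, stages + 1):
        gen = all_strings(alphabet)
        strings = [next(gen) for _ in range(i)]
        for s in strings:
            if accepts_within(s, i) and s not in printed:
                printed.append(s)
    return printed
```

Because every accepted string sj is eventually tried with enough steps, it eventually gets printed, possibly after many stages.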

The intuitive notion of algorithms __________ Turing machine algorithms.

equals

Claim1.65: For any GNFA G, CONVERT(G) is __________ to G. What is the proof idea for this?

equivalent Proof Idea: Proof by induction on the number of states k. The induction hypothesis states that when the algorithm calls itself recursively on input G', the result is a regular expression equivalent to G' (since G' has k-1 states). Hence this regular expression is also equivalent to G, and the algorithm is proved correct.

Given two sets A and B, a function f:A->B is a bijection/correspondence if:

it pairs every element of A with a unique element of B and every element of B with a unique element of A; i.e., it is both one-to-one and onto

Given two sets A and B, a function f:A->B is one-to-one if:

f(a) = f(b) => a = b; the function f never maps two different elements to the same place

What is the simplest computational model?

finite state machine or finite automaton; DFA

When a language is inherently ambiguous:

it can only be generated by ambiguous grammars Ex: {a^i b^j c^k | i = j or j = k}

D2.7: A string w is derived ambiguously in CFG G if:

it has two or more different leftmost derivations. Grammar G is ambiguous if it generates some string ambiguously.

Nonregular languages are:

languages that cannot be recognized by any finite state automaton

D5.20: A language A is __________ __________ to language B, written A <=m B, if there is a computable function f: ∑* -> ∑*, where for every w, w ∈ A <=> f(w) ∈ B. The function f is called the ___________ of A to B.

mapping reducible reduction

E5.27: In Theorem 5.2 we reduced ATM to co-ETM: M accepts w iff L(M1) is not empty, so f is a mapping reduction from ATM to co-ETM. In fact, ___________ __________ from ATM to ETM exists (E5.5).

no reduction

Is EQCFG = {<G, H> | G and H are CFGs and L(G) = L(H)} decidable?

No; EQCFG is undecidable. The symmetric-difference approach used for EQDFA fails because CFLs are not closed under complementation or intersection.

T2.20: A language is context-free iff some __________ __________ recognizes it.

pushdown automaton

L2.21: If a language is context-free, then some __________ __________ recognizes it. What is the proof idea?

pushdown automaton Proof Idea: Let A be a CFL. By definition of a CFL, we know that A also has a CFG, G, generating it. We can convert G into an equivalent PDA, which we call P. P will try to "guess" a derivation from the grammar G that generates the input w. P uses its stack to hold the intermediate strings of this derivation: terminals on top of the stack are matched against the input as they are read, so only the part of the intermediate string that has not yet been matched is stored.

A language is co-Turing recognizable if it is the complement of a Turing-___________ language.

recognizable

C4.23: co-ATM is not Turing-__________. What is the proof for it?

recognizable Proof Idea: If co-ATM were Turing-recognizable, then since ATM is also Turing-recognizable, T4.22 would make ATM decidable. This contradicts the undecidability of ATM.

4.2: The Acceptance Problem: ATM = {<M, w> | M is a TM and M accepts w} is Turing-____________.

recognizable U = "On input <M, w>, where M is a TM and w is a string: 1. Simulate M on input w 2. If M ever enters its accept state, accept; if M ever enters its reject state, reject."

D3.5: A language is Turing-recognizable if some Turing machine __________ it.

recognizes

Given a machine M with language L(M) = A, we say that M __________ A or M __________ A.

recognizes, accepts

L1.55: If a language is described by a regular expression, then it is __________.

regular

L1.60: If a language is regular, then it is described by a __________ __________. What is the proof idea?

regular expression Proof Idea: Let A be a regular language. By definition of a regular language, there is a DFA that recognizes A. Then, we can convert the DFA into a GNFA, and the GNFA into a regular expression.

E5.26: In Theorem 5.4 we reduced ETM to EQTM. There the reduction of f maps <M> to <M, M1>, where M1 is the machine that __________ all inputs.

rejects

C1.40: A language is regular iff:

some NFA recognizes it

C1.19: A language is Turing-decidable iff:

some nondeterministic TM decides it

C1.18: A language is Turing-recognizable iff:

some nondeterministic TM recognizes it

T1.54: A language is regular iff:

some regular expression describes it

2.2: Pushdown automata are like NFAs, but they have a __________.

stack

What is a language enumerated by an enumerator E?

the collection of all the strings that it eventually prints out

Two machines are equivalent if:

they recognize the same language

T4.17: The set of real numbers R is __________.

uncountable

T5.15: PCP = { <P> | P is an instance of the post correspondence problem with a match} is __________. What is the proof for it?

undecidable Proof Idea: Reduction from ATM to PCP via accepting computation histories

T5.4: EQTM = {<M1, M2> | M1 and M2 are TMs and L(M1) = L(M2)} is __________. What is the proof for it?

undecidable Proof Idea: Towards contradiction, assume EQTM is decidable by some TM R. Reduce ETM to EQTM. ETM is the problem of determining whether the language of a TM is empty. EQTM is the problem of determining whether the languages of two TMs are the same. If one of these languages happens to be ∅, we end up with the problem of determining whether the language of the other machine is empty - that is, the ETM problem. We let TM R decide EQTM and construct TM S to decide ETM as follows. S = "On input <M>, where M is a TM: 1. Run R on <M, M1>, where M1 is a TM that rejects all inputs. 2. If R accepts, accept. If R rejects, reject." If R decides EQTM, S decides ETM. But ETM is undecidable, which is a contradiction.

T5.2: ETM = {<M> | M is a TM and L(M) = ∅ } is __________. What is the proof for it?

undecidable Proof Idea: Towards contradiction, assume ETM is decidable by some TM R. We want to reduce ATM to ETM. (Technically, we will reduce ATM to co-ETM, since M accepts w iff L(M1) != ∅.) For the critical input <M, w> of ATM, construct another TM M1. M1 = "On input x: 1. If x != w, reject. 2. If x = w, run M on input w and accept if M accepts." So, L(M1) is either ∅ or {w}. Check with R which case is true. Construct another TM S such that: S = "On input <M, w>, an encoding of a TM M and a string w: 1. Use the description of M and w to construct the TM M1. 2. Run R on input <M1>. 3. If R accepts, reject; if R rejects, accept." If R were a decider for ETM, S would be a decider for ATM. This is impossible, so we reach a contradiction.

T5.1: HALTTM = {<M, w> | M is a TM and M halts on input w} is __________. What is the proof for it?

undecidable Proof Idea: Towards contradiction, assume HALTTM is decidable by some TM R. We reduce ATM to HALTTM: if R indicates that M doesn't halt on w, reject, because <M, w> isn't in ATM; if R indicates that M does halt on w, we can simulate M on w without any danger of looping. Let S be a TM such that, on input <M, w>, it does the following: 1. Run R on <M, w>. 2. If R rejects, reject. 3. If R accepts, simulate M on w until it halts. 4. If M has accepted, accept; if M has rejected, reject." Clearly, if R decides HALTTM, then S decides ATM. But ATM is undecidable, so R cannot exist, and therefore HALTTM is undecidable.

5.28: Rice's Theorem. Let P be a property of the language recognized by a Turing machine such that: (i) P is non-trivial: some, but not all, TMs recognize a language that has this property. (ii) Whenever L(M1) = L(M2) for two TMs M1 and M2, either both or neither of L(M1) and L(M2) have the property P. Then the language LP = {<M> | M is a TM such that L(M) has property P} is __________.

undecidable Proof Idea: Towards contradiction, suppose LP is decidable by some TM R. By condition (i), since P is non-trivial, there are TMs MP and co-MP such that L(MP) has the property P and L(co-MP) does not have the property P. We assume that we have access to such TMs. We will now reduce ATM to LP. The argument depends on whether or not the empty language ∅ has the property P. We construct a TM S such that: L(S) = ∅ if M does not accept w; L(S) = L(MP) if M accepts w and ∅ does not have the property P; L(S) = L(co-MP) if M accepts w and ∅ does have the property P. We feed the description <S> into R, and R tells us whether L(S) has the property. Comparing L(S) with ∅, we can then tell whether M accepts w, which is a contradiction. Proof (for the case where ∅ does not have the property P): S = "On input x: 1. For the moment ignore x and simulate M on input w until M accepts, so that we can go to step 2. (If M never accepts, this step loops forever.) 2. We reached this point because M accepted w. Simulate MP on x, and accept x if the simulation accepts." (In the case where ∅ does have the property P, simulate co-MP on x instead.) In other words: if M accepts w, then L(S) = L(MP) (or L(co-MP)); if M does not accept w, then L(S) = ∅. Feed <S> into R. In the case where ∅ does not have the property P, R(<S>) accepts iff L(S) has the property P iff L(S) = L(MP) iff M accepts w, so outputting R's answer decides ATM. In the other case, R's answer is inverted. Either way we decide ATM, a contradiction.

T4.11: ATM = {<M, w> | M is a TM and M accepts w} is __________. What is the proof idea?

undecidable Proof Idea: Towards contradiction, assume ATM is decidable, with the TM H a decider for ATM: H(<M, w>) halts and accepts if M accepts w, and halts and rejects if M does not accept w. Now consider the following TM D with H as a subroutine. D calls H to determine what M does when the input to M is its own description <M>, and then does the opposite. D = "On input <M>, where M is a TM: 1. Run H on input <M, <M>>. 2. Output the opposite of what H outputs." So D(<M>) accepts if M does not accept <M>, and rejects if M accepts <M>. What about D(<D>)? D(<D>) accepts if D does not accept <D>, and rejects if D accepts <D>. No matter what D does, it is forced to do the opposite, which is obviously a contradiction, so neither D nor H can exist!

