Recall definition: \(A\) is mapping reducible to \(B\) means there is a computable function \(f : \Sigma^* \to \Sigma^*\) such that for all strings \(x\) in \(\Sigma^*\), \[x \in A \qquad \qquad \text{if and only if} \qquad \qquad f(x) \in B.\] Notation: when \(A\) is mapping reducible to \(B\), we write \(A \leq_m B\).
Theorem (Sipser 5.23): If \(A \leq_m B\) and \(A\) is undecidable, then \(B\) is undecidable.
Last time we proved that \(A_{TM} \le_m HALT_{TM}\) where \[HALT_{TM} = \{ \langle M, w \rangle \mid \text{$M$ is a Turing machine, $w$ is a string, and $M$ halts on $w$} \}\] and since \(A_{TM}\) is undecidable, \(HALT_{TM}\) is also undecidable. The function witnessing the mapping reduction mapped strings in \(A_{TM}\) to strings in \(HALT_{TM}\) and strings not in \(A_{TM}\) to strings not in \(HALT_{TM}\) by changing each encoded Turing machine to one whose computation is identical except that it loops instead of rejecting.
True or False: \(\overline{A_{TM}} \leq_m \overline{HALT_{TM}}\)
True or False: \(HALT_{TM} \leq_m A_{TM}\).
Proof: Need computable function \(F: \Sigma^* \to \Sigma^*\) such that \(x \in HALT_{TM}\) iff \(F(x) \in A_{TM}\). Define
\(F = ``\) On input \(x\),
Type-check whether \(x = \langle M, w \rangle\) for some TM \(M\) and string \(w\). If so, move to step 2; if not, output \(\langle \hspace{2in} \rangle\)
Construct the following machine \(M'_x\):
Output \(\langle M'_x , w\rangle\)."
Verifying correctness: (1) Is function well-defined and computable? (2) Does it have the translation property \(x \in HALT_{TM}\) iff its image is in \(A_{TM}\)?
Input string | Output string |
---|---|
\(\langle M, w \rangle\) where \(M\) halts on \(w\) | |
\(\langle M, w \rangle\) where \(M\) does not halt on \(w\) | |
\(x\) not encoding any pair of TM and string | |
Theorem (Sipser 5.28): If \(A \leq_m B\) and \(B\) is recognizable, then \(A\) is recognizable.
Proof:
Corollary: If \(A \leq_m B\) and \(A\) is unrecognizable, then \(B\) is unrecognizable.
Strategy:
(i) To prove that a recognizable language \(R\) is undecidable, prove that \(A_{TM} \leq_m R\).
(ii) To prove that a co-recognizable language \(U\) is undecidable, prove that \(\overline{A_{TM}} \leq_m U\), i.e. that \(A_{TM} \leq_m \overline{U}\).
\[E_{TM} = \{ \langle M \rangle \mid \text{$M$ is a Turing machine and $L(M) = \emptyset$} \}\]
Can we find algorithms to recognize
\(E_{TM}\) ?
\(\overline{E_{TM}}\) ?
Claim: \(A_{TM} \leq_m \overline{E_{TM}}\), and hence also \(\overline{A_{TM}} \leq_m E_{TM}\).
Proof: Need computable function \(F: \Sigma^* \to \Sigma^*\) such that \(x \in A_{TM}\) iff \(F(x) \notin E_{TM}\). Define
\(F = ``\) On input \(x\),
Type-check whether \(x = \langle M, w \rangle\) for some TM \(M\) and string \(w\). If so, move to step 2; if not, output \(\langle \hspace{2in} \rangle\)
Construct the following machine \(M'_x\):
Output \(\langle M'_x \rangle\)."
Verifying correctness: (1) Is function well-defined and computable? (2) Does it have the translation property \(x \in A_{TM}\) iff its image is not in \(E_{TM}\) ?
Input string | Output string |
---|---|
\(\langle M, w \rangle\) where \(w \in L(M)\) | |
\(\langle M, w \rangle\) where \(w \notin L(M)\) | |
\(x\) not encoding any pair of TM and string | |
Recall: \(A\) is mapping reducible to \(B\), written \(A \leq_m B\), means there is a computable function \(f : \Sigma^* \to \Sigma^*\) such that for all strings \(x\) in \(\Sigma^*\), \[x \in A \qquad \qquad \text{if and only if} \qquad \qquad f(x) \in B.\]
So far:
\(A_{TM}\) is recognizable, undecidable, and not-co-recognizable.
\(\overline{A_{TM}}\) is unrecognizable, undecidable, and co-recognizable.
\(HALT_{TM}\) is recognizable, undecidable, and not-co-recognizable.
\(\overline{HALT_{TM}}\) is unrecognizable, undecidable, and co-recognizable.
\(E_{TM}\) is unrecognizable, undecidable, and co-recognizable.
\(\overline{E_{TM}}\) is recognizable, undecidable, and not-co-recognizable.
\[EQ_{TM} = \{ \langle M, M' \rangle \mid \text{$M$ and $M'$ are both Turing machines and $L(M) =L(M')$} \}\]
Can we find algorithms to recognize
\(EQ_{TM}\) ?
\(\overline{EQ_{TM}}\) ?
Goal: Show that \(EQ_{TM}\) is not recognizable and that \(\overline{EQ_{TM}}\) is not recognizable.
Using the Corollary to Theorem 5.28 (if \(A \leq_m B\) and \(A\) is unrecognizable, then \(B\) is unrecognizable), it’s enough to prove that
\(\overline{HALT_{TM}} \leq_m EQ_{TM}\), i.e. \(HALT_{TM} \leq_m \overline{EQ_{TM}}\)
\(\overline{HALT_{TM}} \leq_m \overline{EQ_{TM}}\), i.e. \(HALT_{TM} \leq_m EQ_{TM}\)
Need computable function \(F_1: \Sigma^* \to \Sigma^*\) such that \(x \in HALT_{TM}\) iff \(F_1(x) \notin EQ_{TM}\).
Strategy:
Map strings \(\langle M, w \rangle\) to strings \(\langle M'_{x}, \scalebox{0.5}{\begin{tikzpicture}[->,>=stealth',shorten >=1pt, auto, node distance=2cm, semithick] \tikzstyle{every state}=[text=black, fill=yellow!40] \node[initial,state] (q0) {$q_0$}; \node[state,accepting] (qacc) [right of = q0, xshift = 20]{$q_{acc}$}; \path (q0) edge [loop above] node {$0, 1, \scalebox{1.5}{\textvisiblespace} \to R$} (q0) ; \end{tikzpicture}} \rangle\) . This image string is not in \(EQ_{TM}\) when \(L(M'_x) \neq \emptyset\).
We will build \(M'_x\) so that \(L(M'_{x}) = \Sigma^*\) when \(M\) halts on \(w\) and \(L(M'_x) = \emptyset\) when \(M\) loops on \(w\).
Thus: when \(\langle M,w \rangle \in HALT_{TM}\) it gets mapped to a string not in \(EQ_{TM}\) and when \(\langle M,w \rangle \notin HALT_{TM}\) it gets mapped to a string that is in \(EQ_{TM}\).
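Purely as an illustration of this strategy (and not part of the formal proof), here is a sketch in which Turing machines are modeled as Python functions that either return or loop forever; the helper name `make_M_prime` is ours.

```python
def make_M_prime(M, w):
    """Sketch of M'_x from the strategy above, with Turing machines modeled
    (as an illustrative assumption) as Python functions that either return
    or loop forever.

    If M halts on w:  M_prime accepts every input, so L(M'_x) = Sigma^*.
    If M loops on w:  M_prime loops on every input, so L(M'_x) = empty set.
    """
    def M_prime(u):        # the input u is ignored
        M(w)               # loops forever exactly when M loops on w
        return "accept"    # reached only when M halts on w
    return M_prime
```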
Define
\(F_1 = ``\) On input \(x\),
Type-check whether \(x = \langle M, w \rangle\) for some TM \(M\) and string \(w\). If so, move to step 2; if not, output \(\langle \hspace{2in} \rangle\)
Construct the following machine \(M'_x\):
Output \(\langle M'_{x}, \scalebox{0.5}{\begin{tikzpicture}[->,>=stealth',shorten >=1pt, auto, node distance=2cm, semithick] \tikzstyle{every state}=[text=black, fill=yellow!40] \node[initial,state] (q0) {$q_0$}; \node[state,accepting] (qacc) [right of = q0, xshift = 20]{$q_{acc}$}; \path (q0) edge [loop above] node {$0, 1, \scalebox{1.5}{\textvisiblespace} \to R$} (q0) ; \end{tikzpicture}} \rangle\) "
Verifying correctness: (1) Is function well-defined and computable? (2) Does it have the translation property \(x \in HALT_{TM}\) iff its image is not in \(EQ_{TM}\) ?
Input string | Output string |
---|---|
\(\langle M, w \rangle\) where \(M\) halts on \(w\) | |
\(\langle M, w \rangle\) where \(M\) loops on \(w\) | |
\(x\) not encoding any pair of TM and string | |
Conclude: \(HALT_{TM} \leq_m \overline{EQ_{TM}}\)
Need computable function \(F_2: \Sigma^* \to \Sigma^*\) such that \(x \in HALT_{TM}\) iff \(F_2(x) \in EQ_{TM}\).
Strategy:
Map strings \(\langle M, w \rangle\) to strings \(\langle M'_{x}, \scalebox{0.5}{\begin{tikzpicture}[->,>=stealth',shorten >=1pt, auto, node distance=2cm, semithick] \tikzstyle{every state}=[text=black, fill=yellow!40] \node[initial,state,accepting] (q0) {$q_0$}; ; \end{tikzpicture}} \rangle\) . This image string is in \(EQ_{TM}\) when \(L(M'_x) = \Sigma^*\).
We will build \(M'_x\) so that \(L(M'_{x}) = \Sigma^*\) when \(M\) halts on \(w\) and \(L(M'_x) = \emptyset\) when \(M\) loops on \(w\).
Thus: when \(\langle M,w \rangle \in HALT_{TM}\) it gets mapped to a string in \(EQ_{TM}\) and when \(\langle M,w \rangle \notin HALT_{TM}\) it gets mapped to a string that is not in \(EQ_{TM}\).
Define
\(F_2 = ``\) On input \(x\),
Type-check whether \(x = \langle M, w \rangle\) for some TM \(M\) and string \(w\). If so, move to step 2; if not, output \(\langle \hspace{2in} \rangle\)
Construct the following machine \(M'_x\):
Output \(\langle M'_{x}, \scalebox{0.5}{\begin{tikzpicture}[->,>=stealth',shorten >=1pt, auto, node distance=2cm, semithick] \tikzstyle{every state}=[text=black, fill=yellow!40] \node[initial,state,accepting] (q0) {$q_0$}; ; \end{tikzpicture}} \rangle\) "
Verifying correctness: (1) Is function well-defined and computable? (2) Does it have the translation property \(x \in HALT_{TM}\) iff its image is in \(EQ_{TM}\) ?
Input string | Output string |
---|---|
\(\langle M, w \rangle\) where \(M\) halts on \(w\) | |
\(\langle M, w \rangle\) where \(M\) loops on \(w\) | |
\(x\) not encoding any pair of TM and string | |
Conclude: \(HALT_{TM} \leq_m EQ_{TM}\)
Two models of computation are called equally expressive when every language recognizable with the first model is recognizable with the second, and vice versa.
True / False: NFAs and PDAs are equally expressive.
True / False: Regular expressions and CFGs are equally expressive.
Church-Turing Thesis (Sipser p. 183): The informal notion of algorithm is formalized completely and correctly by the formal definition of a Turing machine. In other words: all reasonably expressive models of computation are equally expressive with the standard Turing machine.
Some examples of models that are equally expressive with deterministic Turing machines:
The May-stay machine model is the same as the usual Turing machine model, except that on each transition, the tape head may move L, move R, or Stay.
Formally: \((Q, \Sigma, \Gamma, \delta, q_0, q_{accept}, q_{reject})\) where \[\delta: Q \times \Gamma \to Q \times \Gamma \times \{L, R, S\}\]
Claim: Turing machines and May-stay machines are equally expressive. To prove …
To translate a standard TM to a may-stay machine: never use the direction \(S\)!
To translate a may-stay machine to a standard TM: any time the may-stay machine would Stay, have the standard machine move right and then left.
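As a concrete illustration of this second translation, here is a small sketch that rewrites a may-stay transition table into a standard one by routing each Stay move through a fresh “bounce” state that immediately moves back left. The dictionary encoding of \(\delta\) (and the tuple-shaped bounce states) are our own conventions, not Sipser’s.

```python
def remove_stay_moves(delta, tape_alphabet):
    """Translate a may-stay transition table into a standard (L/R only) one.

    delta maps (state, symbol) -> (state, symbol, direction), direction in
    {'L', 'R', 'S'}.  Each 'S' move becomes a move Right into a bounce state,
    which then moves Left without changing the tape.  tape_alphabet must
    include the blank symbol, and we assume ordinary states are not tuples
    of the form ("bounce", ...).
    """
    new_delta = {}
    for (q, a), (r, b, d) in delta.items():
        if d in ("L", "R"):
            new_delta[(q, a)] = (r, b, d)          # keep ordinary moves
        else:                                      # d == 'S'
            bounce = ("bounce", r)                 # fresh state, one per target
            new_delta[(q, a)] = (bounce, b, "R")   # step right ...
            for c in tape_alphabet:                # ... then step back left
                new_delta[(bounce, c)] = (r, c, "L")
    return new_delta
```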
A multitape Turing machine with \(k\) tapes can be formally represented as \((Q, \Sigma, \Gamma, \delta, q_0, q_{acc}, q_{rej})\) where \(Q\) is the finite set of states, \(\Sigma\) is the input alphabet with \(\textvisiblespace \notin \Sigma\), \(\Gamma\) is the tape alphabet with \(\Sigma \subsetneq \Gamma\), and \(\delta: Q\times \Gamma^k\to Q \times \Gamma^k \times \{L,R\}^k\) (where \(k\) is the number of tapes).
If \(M\) is a standard TM, it is a \(1\)-tape machine.
To translate a \(k\)-tape machine to a standard TM: Use a new symbol to separate the contents of each tape and keep track of location of head with special version of each tape symbol. Sipser Theorem 3.13
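The following sketch shows just the tape-encoding step of that construction (our own rendering of the idea in Sipser Theorem 3.13): lay the contents of the \(k\) tapes on one tape, separated by a fresh symbol `#`, and mark the scanned cell of each tape with a “dotted” version of its symbol (here, the symbol followed by `.`).

```python
def encode_k_tapes(tapes, heads):
    """Single-tape encoding of k tapes and head positions (Sipser Thm 3.13 idea).

    tapes: list of k strings (tape contents); heads: list of k head indices.
    """
    pieces = []
    for tape, h in zip(tapes, heads):
        cells = list(tape)
        cells[h] += "."                  # mark the cell under this tape's head
        pieces.append("".join(cells))
    return "#" + "#".join(pieces) + "#"

# Example: a 3-tape machine with each head on the first cell of its tape
print(encode_k_tapes(["0101", "aaa", "ba"], [0, 0, 0]))   # -> #0.101#a.aa#b.a#
```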
Enumerators give a different model of computation where a language is produced, one string at a time, rather than recognized by accepting (or not) individual strings.
Each enumerator machine has finite state control, unlimited work tape, and a printer. The computation proceeds according to transition function; at any point machine may “send” a string to the printer. \[E = (Q, \Sigma, \Gamma, \delta, q_0, q_{print})\] \(Q\) is the finite set of states, \(\Sigma\) is the output alphabet, \(\Gamma\) is the tape alphabet (\(\Sigma \subsetneq\Gamma, \textvisiblespace \in \Gamma \setminus \Sigma\)), \[\delta: Q \times \Gamma \times \Gamma \to Q \times \Gamma \times \Gamma \times \{L, R\} \times \{L, R\}\] where in state \(q\), when the working tape is scanning character \(x\) and the printer tape is scanning character \(y\), \(\delta( (q,x,y) ) = (q', x', y', d_w, d_p)\) means transition to control state \(q'\), write \(x'\) on the working tape, write \(y'\) on the printer tape, move in direction \(d_w\) on the working tape, and move in direction \(d_p\) on the printer tape. The computation starts in \(q_0\) and each time the computation enters \(q_{print}\) the string from the leftmost edge of the printer tape to the first blank cell is considered to be printed.
The language enumerated by \(E\), \(L(E)\), is \(\{ w \in \Sigma^* \mid \text{$E$ eventually, at finite time, prints $w$} \}\).
Theorem 3.21 A language is Turing-recognizable iff some enumerator enumerates it.
Proof, part 1: Assume \(L\) is enumerated by some enumerator, \(E\), so \(L = L(E)\). We’ll use \(E\) in a subroutine within a high-level description of a new Turing machine that we will build to recognize \(L\).
Goal: build Turing machine \(M_E\) with \(L(M_E) = L(E)\).
Define \(M_E\) as follows: \(M_E =\) “On input \(w\),
Run \(E\). For each string \(x\) printed by \(E\):
Check if \(x = w\). If so, accept (and halt); otherwise, continue."
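A minimal sketch of \(M_E\), treating the enumerator (as a modeling assumption) as a Python generator of the strings it prints; if \(w\) is never printed, the loop runs forever, which is consistent with recognizing but not deciding \(L(E)\).

```python
def recognizer_from_enumerator(enumerate_L):
    """Build M_E from an enumerator, modeled here as a function that returns a
    (possibly infinite) generator of the strings E prints, in printing order."""
    def M_E(w):
        for x in enumerate_L():    # run E and inspect each printed string
            if x == w:
                return "accept"    # halt and accept as soon as w is printed
        # if E prints infinitely many strings and never prints w, we loop forever
    return M_E
```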
Proof, part 2: Assume \(L\) is Turing-recognizable and there is a Turing machine \(M\) with \(L = L(M)\). We’ll use \(M\) in a subroutine within a high-level description of an enumerator that we will build to enumerate \(L\).
Goal: build enumerator \(E_M\) with \(L(E_M) = L(M)\).
Idea: check each string in turn to see if it is in \(L\).
How? Run computation of \(M\) on each string. But: need to be careful about computations that don’t halt.
Recall String order for \(\Sigma = \{0,1\}\): \(s_1 = \varepsilon\), \(s_2 = 0\), \(s_3 = 1\), \(s_4 = 00\), \(s_5 = 01\), \(s_6 = 10\), \(s_7 = 11\), \(s_8 = 000\), …
Define \(E_M\) as follows: \(E_{M} =\) “Ignore any input. Repeat the following for \(i=1, 2, 3, \ldots\)
Run the computations of \(M\) on \(s_1\), \(s_2\), …, \(s_i\) for (at most) \(i\) steps each
For each of these \(i\) computations that accept during the (at most) \(i\) steps, print out the accepted string."
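Here is a runnable sketch of \(E_M\). The single-step interface for \(M\) is our own modeling choice: `step_machine(w)` is assumed to return a Python generator that performs one step of \(M\)’s computation on \(w\) per `next()` call and finishes with the value `True` exactly when \(M\) accepts; the `rounds` bound is only there so the demo terminates.

```python
from itertools import count, product

def string_order(alphabet=("0", "1")):
    """Yield strings in string order: ε, 0, 1, 00, 01, 10, 11, 000, ..."""
    yield ""
    for length in count(1):
        for letters in product(alphabet, repeat=length):
            yield "".join(letters)

def enumerator_from_recognizer(step_machine, rounds):
    """Dovetailing enumerator E_M, run for a fixed number of rounds.

    In round i, run M on s_1, ..., s_i for at most i steps each and print every
    string accepted within those steps (repeats across rounds are harmless)."""
    for i in range(1, rounds + 1):
        gen = string_order()
        strings = [next(gen) for _ in range(i)]   # s_1, ..., s_i
        for s in strings:
            computation = step_machine(s)
            accepted = False
            try:
                for _ in range(i):                # simulate at most i steps
                    next(computation)
            except StopIteration as done:
                accepted = bool(done.value)       # True iff M accepted s
            if accepted:
                print(s)

# Example "machine": accepts strings ending in 1, taking one step per character
def ends_in_one(w):
    for _ in w:
        yield                                     # one simulated step
    return w.endswith("1")

enumerator_from_recognizer(ends_in_one, rounds=5)   # prints: 1, 1, 1, 01
```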
At any point in the computation, the nondeterministic machine may proceed according to several possibilities: \((Q, \Sigma, \Gamma, \delta, q_0, q_{acc}, q_{rej})\) where \[\delta: Q \times \Gamma \to \mathcal{P}(Q \times \Gamma \times \{L, R\})\] The computation of a nondeterministic Turing machine is a tree with branching when the next step of the computation has multiple possibilities. A nondeterministic Turing machine accepts a string exactly when some branch of the computation tree enters the accept state.
Given a nondeterministic machine, we can use a \(3\)-tape Turing machine to simulate it by doing a breadth-first search of computation tree: one tape is “read-only” input tape, one tape simulates the tape of the nondeterministic computation, and one tape tracks nondeterministic branching. Sipser page 178
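A compact sketch of this breadth-first simulation, with the nondeterministic \(\delta\) modeled (our convention) as a dict mapping (state, symbol) to a set of (state, symbol, direction) triples; the step bound exists only so the demo halts, since a genuine recognizer may search forever on inputs it does not accept.

```python
from collections import deque

def nd_accepts(delta, q0, q_acc, q_rej, w, blank="_", max_steps=10000):
    """Breadth-first search of a nondeterministic TM's computation tree.

    A configuration is (state, tape, head).  Returns True if some branch reaches
    q_acc within max_steps expansions, False if every branch halts without
    accepting, and None if the step bound is reached first."""
    start = (q0, tuple(w) if w else (blank,), 0)
    frontier, seen, steps = deque([start]), {start}, 0
    while frontier and steps < max_steps:
        state, tape, head = frontier.popleft()
        steps += 1
        if state == q_acc:
            return True
        if state == q_rej:
            continue                                   # this branch halts and rejects
        for (r, b, d) in delta.get((state, tape[head]), set()):
            new_tape = list(tape)
            new_tape[head] = b
            new_head = head + 1 if d == "R" else max(head - 1, 0)
            if new_head == len(new_tape):              # grow the tape on demand
                new_tape.append(blank)
            config = (r, tuple(new_tape), new_head)
            if config not in seen:                     # track visited configurations
                seen.add(config)
                frontier.append(config)
    return False if not frontier else None
```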
Summary
Two models of computation are called equally expressive when every language recognizable with the first model is recognizable with the second, and vice versa.
To prove the existence of a Turing machine that decides / recognizes some language, it’s enough to construct an example using any of the equally expressive models.
But: some of the performance properties of these models are not equivalent.
Problem | Notation | Language |
---|---|---|
Acceptance problem for Turing machines | \(A_{TM}\) | \(\{ \langle M,w \rangle \mid \text{$M$ is a Turing machine that accepts input string $w$}\}\) |
Language emptiness testing for Turing machines | \(E_{TM}\) | \(\{ \langle M \rangle \mid \text{$M$ is a Turing machine and $L(M) = \emptyset$} \}\) |
Language equality testing for Turing machines | \(EQ_{TM}\) | \(\{ \langle M_1, M_2 \rangle \mid \text{$M_1$ and $M_2$ are Turing machines and $L(M_1) = L(M_2)$} \}\) |
Consider three example Turing machines \(M_1\), \(M_2\), \(M_3\) (state diagrams omitted here).
Example strings in \(A_{TM}\)
Example strings in \(E_{TM}\)
Example strings in \(EQ_{TM}\)
Theorem: \(A_{TM}\) is Turing-recognizable.
Strategy: To prove this theorem, we need to define a Turing machine \(R_{ATM}\) such that \(L(R_{ATM}) = A_{TM}\).
Define \(R_{ATM} =\) “
Proof of correctness:
We will show that \(A_{TM}\) is undecidable. First, let’s explore what that means.
To prove that a computational problem is decidable, we find or build a Turing machine that recognizes the language encoding the computational problem, and that is a decider.
How do we prove a specific problem is not decidable?
How would we even find such a computational problem?
Counting arguments for the existence of an undecidable language:
The set of all Turing machines is countably infinite.
Each recognizable language has at least one Turing machine that recognizes it (by definition), so there can be no more Turing-recognizable languages than there are Turing machines.
Since there are infinitely many Turing-recognizable languages (think of the singleton sets), there are countably infinitely many Turing-recognizable languages.
Since the set of Turing-decidable languages is an infinite subset of the set of Turing-recognizable languages, the set of Turing-decidable languages is also countably infinite.
Since there are uncountably many languages (because \(\mathcal{P}(\Sigma^*)\) is uncountable), there are uncountably many unrecognizable languages and there are uncountably many undecidable languages.
Thus, there’s at least one undecidable language!
What’s a specific example of a language that is unrecognizable or undecidable?
To prove that a language is undecidable, we need to prove that there is no Turing machine that decides it.
Key idea: proof by contradiction relying on self-referential disagreement.
Theorem: \(A_{TM}\) is not Turing-decidable.
Proof: Suppose towards a contradiction that there is a Turing machine that decides \(A_{TM}\). We call this presumed machine \(M_{ATM}\).
By assumption, for every Turing machine \(M\) and every string \(w\)
If \(w \in L(M)\), then the computation of \(M_{ATM}\) on \(\langle M,w \rangle ~~ \underline{\phantom{\hspace{2.5in}}}\)
If \(w \notin L(M)\), then the computation of \(M_{ATM}\) on \(\langle M,w \rangle ~~ \underline{\phantom{\hspace{2.5in}}}\)
Define a new Turing machine using the high-level description:
\(D =\)“ On input \(\langle M \rangle\), where \(M\) is a Turing machine:
Run \(M_{ATM}\) on \(\langle M, \langle M \rangle \rangle\).
If \(M_{ATM}\) accepts, reject; if \(M_{ATM}\) rejects, accept."
Is \(D\) a Turing machine?
Is \(D\) a decider?
What is the result of the computation of \(D\) on \(\langle D \rangle\)?
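To make the self-reference concrete, here is an informal Python analogy (ours, not part of the proof): “machines” are Python functions, \(\langle M \rangle\) is just the function object itself, and `decides_A_TM` is the decider we assumed, for contradiction, to exist.

```python
def make_D(decides_A_TM):
    """Build the machine D from a presumed decider for A_TM.

    decides_A_TM(M, w) is assumed to always halt and return True exactly when
    "machine" M accepts input w (machines modeled as Python functions)."""
    def D(M):
        return not decides_A_TM(M, M)   # flip the presumed decider's answer
    return D

# The contradiction: decides_A_TM(D, D) would have to equal D(D), which is
# defined to be not decides_A_TM(D, D) -- impossible.
```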
Definition: A language \(L\) over an alphabet \(\Sigma\) is called co-recognizable if its complement, defined as \(\Sigma^* \setminus L = \{ x \in \Sigma^* \mid x \notin L \}\), is Turing-recognizable.
Theorem (Sipser Theorem 4.22): A language is Turing-decidable if and only if both it and its complement are Turing-recognizable.
Proof, first direction: Suppose language \(L\) is Turing-decidable. WTS that both it and its complement are Turing-recognizable.
Proof, second direction: Suppose language \(L\) is Turing-recognizable, and so is its complement. WTS that \(L\) is Turing-decidable.
Notation: The complement of a set \(X\) is denoted with a superscript \(c\), \(X^c\), or an overline, \(\overline{X}\).
Mapping reduction
Motivation: Proving that \(A_{TM}\) is undecidable was hard. How can we leverage that work? Can we relate the decidability / undecidability of one problem to another?
If problem \(X\) is no harder than problem \(Y\)
…and if \(Y\) is easy,
…then \(X\) must be easy too.
If problem \(X\) is no harder than problem \(Y\)
…and if \(X\) is hard,
…then \(Y\) must be hard too.
“Problem \(X\) is no harder than problem \(Y\)” means “Can answer questions about membership in \(X\) by converting them to questions about membership in \(Y\)”.
Definition: \(A\) is mapping reducible to \(B\) means there is a computable function \(f : \Sigma^* \to \Sigma^*\) such that for all strings \(x\) in \(\Sigma^*\), \[x \in A \qquad \qquad \text{if and only if} \qquad \qquad f(x) \in B.\] Notation: when \(A\) is mapping reducible to \(B\), we write \(A \leq_m B\).
Intuition: \(A \leq_m B\) means \(A\) is no harder than \(B\), i.e. that the level of difficulty of \(A\) is less than or equal the level of difficulty of \(B\).
TODO
What is a computable function?
How do mapping reductions help establish the computational difficulty of languages?
Computable functions
Definition: A function \(f: \Sigma^* \to \Sigma^*\) is a computable function means there is some Turing machine such that, for each \(x\), on input \(x\) the Turing machine halts with exactly \(f(x)\) followed by all blanks on the tape
Examples of computable functions:
The function that maps a string to a string which is one character longer and whose value, when interpreted as a fixed-width binary representation of a nonnegative integer is twice the value of the input string (when interpreted as a fixed-width binary representation of a non-negative integer) \[f_1: \Sigma^* \to \Sigma^* \qquad f_1(x) = x0\]
To prove \(f_1\) is computable function, we define a Turing machine computing it.
High-level description
“On input \(w\)
1. Append \(0\) to \(w\).
2. Halt.”
Implementation-level description
“On input \(w\)
1. Sweep read-write head to the right until find first blank cell.
2. Write 0.
3. Halt.”
Formal definition: \((\{q_0, q_{acc}, q_{rej}\}, \{0,1\}, \{0,1,\textvisiblespace\}, \delta, q_0, q_{acc}, q_{rej})\) where \(\delta\) is specified by the state diagram:
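The implementation-level description can be turned directly into a small simulation; the list-of-cells tape and the name `run_f1` are our own conventions for illustration.

```python
def run_f1(w, blank="_"):
    """Simulate the machine computing f_1(x) = x0 at the implementation level:
    sweep the head right to the first blank cell, write 0, halt."""
    tape = list(w) + [blank]              # input followed by a blank cell
    head = 0
    while tape[head] != blank:            # 1. sweep right until the first blank
        head += 1
    tape[head] = "0"                      # 2. write 0
    return "".join(tape).rstrip(blank)    # 3. halt; the tape now holds f_1(w)

print(run_f1("1011"))   # -> 10110
```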
The function that maps a string to the result of repeating the string twice. \[f_2: \Sigma^* \to \Sigma^* \qquad f_2( x ) = xx\]
The function that maps strings that are not the codes of NFAs to the empty string and that maps strings that code NFAs to the code of a DFA that recognizes the language recognized by the NFA produced by the macro-state construction from Chapter 1.
The function that maps strings that are not the codes of Turing machines to the empty string and that maps strings that code Turing machines to the code of the related Turing machine that acts like the Turing machine coded by the input, except that if this Turing machine coded by the input tries to reject, the new machine will go into a loop. \[f_4: \Sigma^* \to \Sigma^* \qquad f_4( x ) = \begin{cases} \varepsilon \qquad&\text{if $x$ is not the code of a TM} \\ \langle (Q \cup \{q_{trap} \}, \Sigma, \Gamma, \delta', q_0, q_{acc}, q_{rej} ) \rangle \qquad&\text{if $x = \langle (Q, \Sigma, \Gamma, \delta, q_0, q_{acc}, q_{rej} )\rangle$}\end{cases}\] where \(q_{trap} \notin Q\) and \[\delta'( (q,x) ) = \begin{cases} (r,y,d) &\text{if $q \in Q$, $x \in \Gamma$, $\delta ((q,x)) = (r,y,d)$, and $r \neq q_{rej}$} \\ (q_{trap}, \textvisiblespace, R) & \text{otherwise} \end{cases}\]
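Since \(f_4\) is specified exactly, its transformation of the transition table can be mirrored directly; the dictionary encoding of \(\delta\) is our own convention.

```python
def reject_to_loop(delta, states, tape_alphabet, q_rej, blank="_", q_trap="q_trap"):
    """Build delta' as in the definition of f_4: keep every transition that does
    not go to q_rej; redirect everything else (including all moves out of the
    new state q_trap) to q_trap, which writes a blank and moves right forever."""
    new_delta = {}
    for q in list(states) + [q_trap]:
        for a in tape_alphabet:
            move = delta.get((q, a))
            if q != q_trap and move is not None and move[0] != q_rej:
                new_delta[(q, a)] = move                     # unchanged transition
            else:
                new_delta[(q, a)] = (q_trap, blank, "R")     # loop instead of rejecting
    return new_delta
```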
Definition: \(A\) is mapping reducible to \(B\) means there is a computable function \(f : \Sigma^* \to \Sigma^*\) such that for all strings \(x\) in \(\Sigma^*\), \[x \in A \qquad \qquad \text{if and only if} \qquad \qquad f(x) \in B.\]
Making intutition precise …
Theorem (Sipser 5.22): If \(A \leq_m B\) and \(B\) is decidable, then \(A\) is decidable.
Theorem (Sipser 5.23): If \(A \leq_m B\) and \(A\) is undecidable, then \(B\) is undecidable.
Recall definition: \(A\) is mapping reducible to \(B\) means there is a computable function \(f : \Sigma^* \to \Sigma^*\) such that for all strings \(x\) in \(\Sigma^*\), \[x \in A \qquad \qquad \text{if and only if} \qquad \qquad f(x) \in B.\] Notation: when \(A\) is mapping reducible to \(B\), we write \(A \leq_m B\).
Intuition: \(A \leq_m B\) means \(A\) is no harder than \(B\), i.e. that the level of difficulty of \(A\) is less than or equal the level of difficulty of \(B\).
Example: \(A_{TM} \leq_m A_{TM}\)
Example: \(A_{DFA} \leq_m \{ ww \mid w \in \{0,1\}^* \}\)
Halting problem \[HALT_{TM} = \{ \langle M, w \rangle \mid \text{$M$ is a Turing machine, $w$ is a string, and $M$ halts on $w$} \}\]
Define \(F: \Sigma^* \to \Sigma^*\) by \[F(x) = \begin{cases} const_{out} \qquad &\text{if $x \neq \langle M,w \rangle$ for any Turing machine $M$ and string $w$ over the alphabet of $M$} \\ \langle M', w \rangle \qquad & \text{if $x = \langle M, w \rangle$ for some Turing machine $M$ and string $w$ over the alphabet of $M$.} \end{cases}\] where \(const_{out} = \langle \includegraphics[width=1.5in]{../../resources/machines/Lect22TM1.png} , \varepsilon \rangle\) and \(M'\) is a Turing machine that computes like \(M\) except, if the computation ever were to go to a reject state, \(M'\) loops instead.
\(F( \langle \includegraphics[width=2.5in]{../../resources/machines/Lect22TM2.png} , \varepsilon \rangle)\) =
To use this function to prove that \(A_{TM} \leq_m HALT_{TM}\), we need two claims:
Claim (1): \(F\) is computable
Claim (2): for every \(x\), \(x \in A_{TM}\) iff \(F(x) \in HALT_{TM}\).
In practice, computers (unlike the idealized Turing machine) don’t have infinite memory, and we can’t afford to wait unboundedly long for an answer. “Decidable” isn’t good enough; we want “efficiently decidable”.
For a given algorithm working on a given input, how long do we need to wait for an answer? How does the running time depend on the input in the worst-case? average-case? We expect to have to spend more time on computations with larger inputs.
A language is recognizable if
A language is decidable if
A language is efficiently decidable if
A function is computable if
A function is efficiently computable if
Definition (Sipser 7.1): For \(M\) a deterministic decider, its running time is the function \(f: \mathbb{N} \to \mathbb{N}\) given by \[f(n) = \text{max number of steps $M$ takes before halting, over all inputs of length $n$}\]
Definition (Sipser 7.7): For each function \(t(n)\), the time complexity class \(TIME(t(n))\), is defined by \[TIME( t(n)) = \{ L \mid \text{$L$ is decidable by a Turing machine with running time in $O(t(n))$} \}\]
An example of an element of \(TIME( 1 )\) is
An example of an element of \(TIME( n )\) is
Note: \(TIME( 1) \subseteq TIME (n) \subseteq TIME(n^2)\)
Definition (Sipser 7.12) : \(P\) is the class of languages that are decidable in polynomial time on a deterministic 1-tape Turing machine \[P = \bigcup_k TIME(n^k)\]
Compare to exponential time: brute-force search.
Theorem (Sipser 7.8): Let \(t(n)\) be a function with \(t(n) \geq n\). Then every \(t(n)\) time deterministic multitape Turing machine has an equivalent \(O(t^2(n))\) time deterministic 1-tape Turing machine.
Definition (Sipser 7.1): For \(M\) a deterministic decider, its running time is the function \(f: \mathbb{N} \to \mathbb{N}\) given by \[f(n) = \text{max number of steps $M$ takes before halting, over all inputs of length $n$}\]
Definition (Sipser 7.7): For each function \(t(n)\), the time complexity class \(TIME(t(n))\), is defined by \[TIME( t(n)) = \{ L \mid \text{$L$ is decidable by a Turing machine with running time in $O(t(n))$} \}\] Definition (Sipser 7.12) : \(P\) is the class of languages that are decidable in polynomial time on a deterministic 1-tape Turing machine \[P = \bigcup_k TIME(n^k)\]
Definition (Sipser 7.9): For \(N\) a nondeterministic decider, its running time is the function \(f: \mathbb{N} \to \mathbb{N}\) given by \[f(n) = \text{max number of steps $N$ takes on any branch before halting, over all inputs of length $n$}\]
Definition (Sipser 7.21): For each function \(t(n)\), the nondeterministic time complexity class \(NTIME(t(n))\), is defined by \[NTIME( t(n)) = \{ L \mid \text{$L$ is decidable by a nondeterministic Turing machine with running time in $O(t(n))$} \}\] \[NP = \bigcup_k NTIME(n^k)\]
True or False: \(TIME(n^2) \subseteq NTIME(n^2)\)
True or False: \(NTIME(n^2) \subseteq TIME(n^2)\)
Every problem in NP is decidable with an exponential-time algorithm
Nondeterministic approach: guess a possible solution, verify that it works.
Brute-force (worst-case exponential time) approach: iterate over all possible solutions, for each one, check if it works.
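To see the contrast concretely, here is a small sketch for SAT, with a CNF formula represented (our convention) as a list of clauses, each clause a list of (variable, is-positive) literal pairs: a polynomial-time check of one proposed assignment next to the exponential-time search over all assignments.

```python
from itertools import product

def verify(cnf, assignment):
    """Polynomial time: check that one proposed assignment satisfies the formula."""
    return all(any(assignment[var] == positive for var, positive in clause)
               for clause in cnf)

def brute_force_sat(cnf, variables):
    """Exponential time: try all 2^n assignments, verifying each one."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if verify(cnf, assignment):
            return assignment
    return None

# (x OR NOT y) AND (NOT x OR y)
cnf = [[("x", True), ("y", False)], [("x", False), ("y", True)]]
print(brute_force_sat(cnf, ["x", "y"]))   # -> {'x': False, 'y': False}
```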
Examples in \(P\)
Can’t use nondeterminism; can use multiple tapes; often need to be “more clever” than the naïve / brute-force approach. \[PATH = \{\langle G,s,t\rangle \mid \textrm{$G$ is a digraph with $n$ nodes and there is a path from $s$ to $t$}\}\] Use breadth-first search to show it is in \(P\). \[RELPRIME = \{ \langle x,y\rangle \mid \textrm{$x$ and $y$ are relatively prime integers}\}\] Use the Euclidean Algorithm to show it is in \(P\). \[L(G) = \{w \mid \textrm{$w$ is generated by $G$}\}\] (where \(G\) is a context-free grammar). Use dynamic programming to show it is in \(P\).
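Two of these polynomial-time algorithms are short enough to sketch directly (the adjacency-dict encoding of the digraph is our own convention):

```python
from collections import deque

def relprime(x, y):
    """Euclidean algorithm: x and y are relatively prime iff gcd(x, y) == 1."""
    while y:
        x, y = y, x % y
    return x == 1

def path_exists(graph, s, t):
    """Breadth-first search in a digraph given as {node: [successor, ...]}."""
    frontier, seen = deque([s]), {s}
    while frontier:
        u = frontier.popleft()
        if u == t:
            return True
        for v in graph.get(u, []):
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return False

print(relprime(10, 21))                                          # -> True
print(path_exists({"a": ["b"], "b": ["c"], "c": []}, "a", "c"))  # -> True
```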
Examples in \(NP\)
“Verifiable" i.e. NP, Can be decided by a nondeterministic TM in polynomial time, best known deterministic solution may be brute-force, solution can be verified by a deterministic TM in polynomial time.
\[HAMPATH = \{\langle G,s,t \rangle \mid \textrm{$G$ is digraph with $n$ nodes, there is path from $s$ to $t$ that goes through every node exactly once}\}\] \[VERTEX-COVER = \{ \langle G,k\rangle \mid \textrm{$G$ is an undirected graph with $n$ nodes that has a $k$-node vertex cover}\}\] \[CLIQUE = \{ \langle G,k\rangle \mid \textrm{$G$ is an undirected graph with $n$ nodes that has a $k$-clique}\}\] \[SAT =\{ \langle X \rangle \mid \textrm{$X$ is a satisfiable Boolean formula with $n$ variables}\}\]
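For example, membership of \(\langle G, k \rangle\) in \(CLIQUE\) is easy to verify in polynomial time once a candidate set of \(k\) vertices (a “certificate”) is supplied; the edge-set encoding below is our own convention.

```python
def verify_clique(edges, k, candidate):
    """Polynomial-time verifier for CLIQUE: check that `candidate` is a set of
    k vertices with an edge between every pair.  edges is a set of frozensets."""
    nodes = list(candidate)
    if len(nodes) != k:
        return False
    return all(frozenset((u, v)) in edges
               for i, u in enumerate(nodes) for v in nodes[i + 1:])

edges = {frozenset(e) for e in [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]}
print(verify_clique(edges, 3, {"a", "b", "c"}))   # -> True
print(verify_clique(edges, 3, {"b", "c", "d"}))   # -> False: no edge {b, d}
```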
Problems in \(P\) | Problems in \(NP\) |
---|---|
(Membership in any) regular language | Any problem in \(P\) |
(Membership in any) context-free language | |
\(A_{DFA}\) | \(SAT\) |
\(E_{DFA}\) | \(CLIQUE\) |
\(EQ_{DFA}\) | \(VERTEX-COVER\) |
\(PATH\) | \(HAMPATH\) |
\(RELPRIME\) | \(\ldots\) |
\(\ldots\) | |
Notice: \(NP \subseteq \{ L \mid L \text{ is decidable} \}\) so \(A_{TM} \notin NP\)
Million-dollar question: Is \(P = NP\)?
One approach to trying to answer it is to look for hardest problems in \(NP\) and then (1) if we can show that there are efficient algorithms for them, then we can get efficient algorithms for all problems in \(NP\) so \(P = NP\), or (2) these problems might be good candidates for showing that there are problems in \(NP\) for which there are no efficient algorithms.
Definition (Sipser 7.29) Language \(A\) is polynomial-time mapping reducible to language \(B\), written \(A \leq_P B\), means there is a polynomial-time computable function \(f: \Sigma^* \to \Sigma^*\) such that for every \(x \in \Sigma^*\) \[x \in A \qquad \text{iff} \qquad f(x) \in B.\] The function \(f\) is called the polynomial time reduction of \(A\) to \(B\).
Theorem (Sipser 7.31): If \(A \leq_P B\) and \(B \in P\) then \(A \in P\).
Proof:
Definition (Sipser 7.34; based on Stephen Cook and Leonid Levin’s work in the 1970s): A language \(B\) is NP-complete means (1) \(B\) is in NP and (2) every language \(A\) in \(NP\) is polynomial time reducible to \(B\).
Theorem (Sipser 7.35): If \(B\) is NP-complete and \(B \in P\) then \(P = NP\).
Proof:
3SAT: A literal is a Boolean variable (e.g. \(x\)) or a negated Boolean variable (e.g. \(\bar{x}\)). A Boolean formula is a 3cnf-formula if it is a Boolean formula in conjunctive normal form (a conjunction of disjunctive clauses of literals) and each clause has three literals. \[3SAT = \{ \langle \phi \rangle \mid \text{$\phi$ is a satisfiable 3cnf-formula} \}\]
Example string in \(3SAT\) \[\langle (x \vee \bar{y} \vee {\bar z}) \wedge (\bar{x} \vee y \vee z) \wedge (x \vee y \vee z) \rangle\]
Example string not in \(3SAT\) \[\langle (x \vee y \vee z) \wedge (x \vee y \vee{\bar z}) \wedge (x \vee \bar{y} \vee z) \wedge (x \vee \bar{y} \vee \bar{z}) \wedge (\bar{x} \vee y \vee z) \wedge (\bar{x} \vee y \vee{\bar z}) \wedge (\bar{x} \vee \bar{y} \vee z) \wedge (\bar{x} \vee \bar{y} \vee \bar{z}) \rangle\]
Cook-Levin Theorem: \(3SAT\) is \(NP\)-complete.
Are there other \(NP\)-complete problems? To prove that \(X\) is \(NP\)-complete
From scratch: prove \(X\) is in \(NP\) and that all \(NP\) problems are polynomial-time reducible to \(X\).
Using reduction: prove \(X\) is in \(NP\) and that a known-to-be \(NP\)-complete problem is polynomial-time reducible to \(X\).
CLIQUE: A \(k\)-clique in an undirected graph is a set of \(k\) nodes with an edge between every pair of them (a complete subgraph on \(k\) nodes). \[CLIQUE = \{ \langle G, k \rangle \mid \text{$G$ is an undirected graph with a $k$-clique} \}\]
Example string in \(CLIQUE\)
Example string not in \(CLIQUE\)
Theorem (Sipser 7.32): \[3SAT \leq_P CLIQUE\]
Given a Boolean formula in conjunctive normal form with \(k\) clauses and three literals per clause, we will map it to a graph so that the graph has a \(k\)-clique if the original formula is satisfiable and does not have a \(k\)-clique if the original formula is not satisfiable.
The graph has \(3k\) vertices (one for each literal in each clause) and an edge between every pair of vertices except
vertices for two literals in the same clause
vertices for literals that are negations of one another
Example: \((x \vee \bar{y} \vee {\bar z}) \wedge (\bar{x} \vee y \vee z) \wedge (x \vee y \vee z)\)
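Here is a runnable rendering of that construction, with a 3cnf-formula represented (our convention) as a list of clauses, each a list of three (variable, is-positive) literals; vertices are (clause index, literal) pairs and edges follow the two exclusion rules above.

```python
def threesat_to_clique(clauses):
    """Build the CLIQUE instance for a 3cnf-formula: one vertex per literal
    occurrence, an edge between vertices in different clauses whose literals
    are not negations of each other, and target clique size k = #clauses."""
    nodes = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
    edges = set()
    for a in range(len(nodes)):
        for b in range(a + 1, len(nodes)):
            (i, (v1, s1)), (j, (v2, s2)) = nodes[a], nodes[b]
            same_clause = (i == j)
            contradictory = (v1 == v2 and s1 != s2)
            if not same_clause and not contradictory:
                edges.add(frozenset({a, b}))
    return nodes, edges, len(clauses)

# (x OR NOT y OR NOT z) AND (NOT x OR y OR z) AND (x OR y OR z)
phi = [[("x", True), ("y", False), ("z", False)],
       [("x", False), ("y", True), ("z", True)],
       [("x", True), ("y", True), ("z", True)]]
nodes, edges, k = threesat_to_clique(phi)
print(len(nodes), len(edges), k)   # -> 9 21 3
```

Since this example formula is satisfiable (for instance, set \(x\) and \(y\) true), the resulting graph does have a \(3\)-clique, as the correctness argument for the reduction predicts.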
Model of Computation | Class of Languages |
---|---|
Deterministic finite automata: formal definition, how to design for a given language, how to describe language of a machine? Nondeterministic finite automata: formal definition, how to design for a given language, how to describe language of a machine? Regular expressions: formal definition, how to design for a given language, how to describe language of expression? Also: converting between different models. | Class of regular languages: what are the closure properties of this class? which languages are not in the class? using pumping lemma to prove nonregularity. |
Push-down automata: formal definition, how to design for a given language, how to describe language of a machine? Context-free grammars: formal definition, how to design for a given language, how to describe language of a grammar? | Class of context-free languages: what are the closure properties of this class? which languages are not in the class? |
Turing machines that always halt in polynomial time | \(P\) |
Nondeterministic Turing machines that always halt in polynomial time | \(NP\) |
Deciders (Turing machines that always halt): formal definition, how to design for a given language, how to describe language of a machine? | Class of decidable languages: what are the closure properties of this class? which languages are not in the class? using diagonalization and mapping reduction to show undecidability |
Turing machines formal definition, how to design for a given language, how to describe language of a machine? | Class of recognizable languages: what are the closure properties of this class? which languages are not in the class? using closure and mapping reduction to show unrecognizability |
Given a language, prove it is regular
Strategy 1: construct DFA recognizing the language and prove it works.
Strategy 2: construct NFA recognizing the language and prove it works.
Strategy 3: construct regular expression recognizing the language and prove it works.
“Prove it works” means …
Example: \(L = \{ w \in \{0,1\}^* \mid \textrm{$w$ has odd number of $1$s or starts with $0$}\}\)
Using NFA
Using regular expressions
Example: Select all and only the options that result in a true statement: “To show a language \(A\) is not regular, we can…”
Show \(A\) is finite
Show there is a CFG generating \(A\)
Show \(A\) has no pumping length
Show \(A\) is undecidable
Example: What is the language generated by the CFG with rules \[\begin{aligned} S &\to aSb \mid bY \mid Ya \\ Y &\to bY \mid Ya \mid \varepsilon \end{aligned}\]
Example: Prove that the language \(T = \{ \langle M \rangle \mid \textrm{$M$ is a Turing machine and $L(M)$ is infinite}\}\) is undecidable.
Example: Prove that the class of decidable languages is closed under concatenation.