\documentclass{article}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsthm}
\setlength{\oddsidemargin}{.25in}
\setlength{\evensidemargin}{.25in}
\setlength{\textwidth}{6in}
\setlength{\topmargin}{-0.5in}
\setlength{\textheight}{8.5in}
\newcommand{\header}[5]{
\renewcommand{\thepage}{\arabic{page}}
\noindent
\begin{center}
\framebox{
\vbox{
\hbox to 5.78in {\bf {#1} \hfill {#2}}
\vspace{2mm}
\hbox to 5.78in { {\Large \hfill Lecture {#3}\hfill} }
\vspace{2mm}
\hbox to 5.78in { {\it Instructor: {#4} \hfill Scribe: {#5}} }
}
}
\end{center}
\vspace*{3mm}
}
\newtheorem{theorem}{Theorem}
\newtheorem{claim}{Claim}
\newtheorem{definition}{Definition}
\newcommand{\lang}[1]{{\sc #1}}
\newcommand{\anglebrackets}[1]{{\langle #1 \rangle}}
\begin{document}
\header{6.841/18.405J: Advanced Complexity}{Wednesday, February 12, 2003}
{3}{Madhu Sudan}{Bobby Kleinberg}
\section{The language \lang{MinDNF}}
At the end of the last lecture, we introduced the language
\lang{MinDNF}, defined as follows:
\begin{eqnarray*}
\mbox{\lang{MinDNF}} & = &
\{(\phi, k):\mbox{ $\phi$ is a DNF formula
and }\exists\mbox{ DNF formula $\phi'$ s.t. $|\phi'| \le k$ and } \\
& & \forall x_1 \ldots x_n \; \phi'(x_1,\ldots,x_n)
\Leftrightarrow \phi(x_1,\ldots,x_n) \}.
\end{eqnarray*}
To be precise about some of the terms used above,
we recall the following definitions relating to DNF formulae.
If $x_1,\ldots,x_n$ are Boolean variables whose negations
are denoted by $\overline{x_1},\ldots,\overline{x_n}$,
we refer to the symbols
$\{x_1,\ldots,x_n,\overline{x_1},\ldots,\overline{x_n}\}$
as {\em literals}. A DNF formula is a Boolean formula expressed
as a disjunction of clauses, each of which is a conjunction of one
or more literals. As a matter of notation, we use ``$\cdot$'' to
denote logical {\sc and}, and ``+'' to denote logical {\sc or}.
Thus a typical DNF is
\[
\phi(x_1,\ldots,x_n) = x_1 \cdot \overline{x_2} \cdot x_3 +
\overline{x_3} \cdot x_4 \cdot x_5.
\]
The {\em size} of a DNF formula $\phi$, denoted $|\phi|$, is
defined as the number of literals appearing in $\phi$ (counted
with multiplicities).
As we noted last time, the language \lang{MinDNF} appears to be
strictly harder than NP. But a non-deterministic Turing machine with
access to an oracle for \lang{SAT} can accept \lang{MinDNF} in
polynomial time, as follows: first guess the formula $\phi'$,
then use the \lang{SAT} oracle to verify that
$(\phi'(x_1,\ldots,x_n) \Leftrightarrow \phi(x_1,\ldots,x_n))$
is a tautology. Thus we have
\[
\mbox{NP} \le \mbox{\lang{MinDNF}} \le \mbox{NP}^{\mbox{NP}},
\]
though it's not obvious which of the above inclusions is
strict. In fact, it was shown only recently that \lang{MinDNF}
is $\mbox{NP}^{\mbox{NP}}$-complete.
The language \lang{MinDNF} appears to have some extra power beyond
that of NP; this power comes from the fact that its definition
combines an existential step ($\exists \phi'$) with a
universal step ($\forall x_1,\ldots,x_n$), whereas NP
only has the power to perform existential steps. To formalize
this, we introduce the notion of {\em alternation}.
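As a concrete illustration (a minimal Python sketch, not part of the lecture; the clause-list encoding below is a choice made here for convenience), one can represent a DNF formula, compute its size, and carry out the universal step of the \lang{MinDNF} definition by brute force over all $2^n$ assignments:

```python
from itertools import product

# A DNF over variables x_0..x_{n-1} is a list of clauses; each clause is a
# list of literals, where literal i (i >= 0) denotes x_i and -i-1 denotes
# its negation.  (This encoding is one convenient choice, not the lecture's.)
def dnf_size(phi):
    """|phi|: the number of literals, counted with multiplicities."""
    return sum(len(clause) for clause in phi)

def dnf_eval(phi, x):
    """Evaluate the DNF phi on the assignment x (a tuple of booleans)."""
    lit = lambda l: x[l] if l >= 0 else not x[-l - 1]
    return any(all(lit(l) for l in clause) for clause in phi)

def equivalent(phi, psi, n):
    """The universal step of MinDNF: check phi <=> psi on all 2^n inputs."""
    return all(dnf_eval(phi, x) == dnf_eval(psi, x)
               for x in product([False, True], repeat=n))

# The typical DNF from the text, 0-indexed: x1.~x2.x3 + ~x3.x4.x5.
phi = [[0, -2, 2], [-3, 3, 4]]
```

Deciding membership of $(\phi, k)$ in \lang{MinDNF} would additionally require searching over all DNF formulae $\phi'$ with $|\phi'| \le k$, which is exactly what the existential step supplies.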
\section{Alternating Turing machines}
An alternating Turing machine (ATM) is defined analogously to an ordinary
Turing machine (infinite tape, finite alphabet, finite state control)
but with two special states, denoted ``$\exists$'' and ``$\forall$''.
Both of these special states have two outgoing transitions. In state
$\exists$, the machine accepts if and only if at least one of the
outgoing transitions accepts. In state $\forall$, the machine accepts
if and only if all of the outgoing transitions accept.
In thinking about the computation of an alternating Turing machine,
it is helpful to represent the computation as a tree, as in
figure~\ref{fig:comptree}. Each node of the tree is labeled
with the machine's configuration (tape contents, position of R/W head,
state of control) and has arrows pointing to
the configurations reachable by outgoing transitions
from that node. The outcome of the computation is determined
recursively as follows. A node which is in the machine's
accept state $q_f$ accepts. A node in the $\exists$ state accepts
if and only if at least one of its children accepts. A node in the
$\forall$ state accepts if and only if both of its children accept.
Every other node has only one child, and accepts if and only if its
child accepts. The machine accepts if and only if the root of its
computation tree accepts.
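The recursive acceptance rule can be written out as a short Python sketch (a toy over explicitly given trees, not the machine model itself; the node labels, including a ``reject'' leaf for halting non-accepting configurations, are choices made here for illustration):

```python
# Evaluate an ATM computation tree.  A node is a pair (label, children) with
# label in {'accept', 'reject', 'exists', 'forall', 'det'}; 'reject' models a
# halting non-accepting leaf, and 'det' a deterministic step with one child.
def accepts(node):
    label, children = node
    if label == 'accept':
        return True
    if label == 'reject':
        return False
    if label == 'exists':   # accepts iff at least one child accepts
        return any(accepts(c) for c in children)
    if label == 'forall':   # accepts iff all children accept
        return all(accepts(c) for c in children)
    return accepts(children[0])  # deterministic step: unique child

tree = ('exists', [('forall', [('accept', []), ('reject', [])]),
                   ('det', [('accept', [])])])
```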
\begin{figure}
\begin{center}
\begin{picture}(300,200)
% Root
\put(150,180){\circle{20}}
\put(145,178){$C_0$}
\put(150,170){\vector(0,-1){10}}
% First fork
\put(150,150){\circle{20}}
\put(147,147){$\exists$}
\put(143,143){\vector(-1,-1){15}}
\put(157,143){\vector(1,-1){15}}
% Left subtree
\put(120,120){\circle{20}}
\put(115,118){$C_1$}
\put(113,113){\vector(-1,-1){15}}
\put(90,90){\circle{20}}
\put(87,87){$\forall$}
\put(83,83){\vector(-1,-1){15}}
\put(97,83){\vector(1,-1){15}}
% LL
\put(60,60){\circle{20}}
\put(57,57){$\forall$}
\put(53,53){\vector(-1,-1){15}}
\put(67,53){\vector(1,-1){15}}
\multiput(40,33)(0,-7){3}{\circle*{2}}
\multiput(80,33)(0,-7){3}{\circle*{2}}
% LR
\put(120,60){\circle{20}}
\put(115,58){$C_2$}
\put(120,50){\vector(0,-1){12}}
\multiput(120,33)(0,-7){3}{\circle*{2}}
% Right subtree
\put(180,120){\circle{20}}
\put(177,117){$\forall$}
\put(173,113){\vector(-1,-1){15}}
\put(187,113){\vector(1,-1){15}}
% RL
\put(150,90){\circle{20}}
\put(145,88){$C_3$}
\put(157,83){\vector(1,-1){15}}
\put(180,60){\circle{20}}
\put(177,57){$\exists$}
\put(173,53){\vector(-1,-1){15}}
\put(187,53){\vector(1,-1){15}}
\multiput(160,33)(0,-7){3}{\circle*{2}}
\multiput(200,33)(0,-7){3}{\circle*{2}}
% RR
\put(210,90){\circle{20}}
\put(205,88){$C_4$}
\put(217,83){\vector(1,-1){15}}
\put(240,60){\circle{20}}
\put(237,57){$\exists$}
\put(233,53){\vector(-1,-1){15}}
\put(247,53){\vector(1,-1){15}}
\multiput(220,33)(0,-7){3}{\circle*{2}}
\multiput(260,33)(0,-7){3}{\circle*{2}}
\end{picture}
\end{center}
\caption{Sample computation tree for an alternating Turing machine}
\label{fig:comptree}
\end{figure}
Note that non-deterministic Turing machines may be defined in the
same manner, but using only the $\exists$ state without the $\forall$
state. Thus, NP is the class of all languages accepted by a
polynomial-time ATM which uses the
$\exists$ but not the $\forall$ state. Similarly, co-NP is
the class of all languages accepted by a polynomial-time
ATM which uses the $\forall$ but not the $\exists$ state.
In studying the computational power of alternating Turing machines,
the following resources are of interest:
\begin{itemize}
\item {\bf TIME}, defined as the depth of the computation tree.
\item {\bf SPACE}, defined as the maximum space used in any
configuration in the tree.
\item {\bf ALTERNATION}, defined as the maximum (over all computation
paths) of the number of transitions from $\exists$ to $\forall$
and vice-versa.
\end{itemize}
\begin{definition} The class $\mathrm{ATISP}[a(n),t(n),s(n)]$ is defined
to be the class of languages accepted by an ATM which uses alternation
$\le a(n)$, time $\le t(n)$, and space $\le s(n)$. Note that
an expression such as $t(n)$ appearing in these inequalities is to be
interpreted literally as $t(n)$, not as $O(t(n))$.
As special cases of this definition, we have:
\begin{itemize}
\item $\mathrm{ATIME}(t(n)) := \mathrm{ATISP}[\infty,t(n),\infty]$
\item $\mathrm{ASPACE}(s(n)) := \mathrm{ATISP}[\infty,\infty,s(n)]$
\item $\Sigma_i^P := \bigcup_{p \in \mbox{poly}}
\mathrm{ATISP}[i,p(n),\infty]$ where the first special state is $\exists$.
\item $\Pi_i^P := \bigcup_{p \in \mbox{poly}}
\mathrm{ATISP}[i,p(n),\infty]$ where the first special state is $\forall$.
\end{itemize}
\end{definition}
\section{Relationships with classical resources}
The classes $\Sigma_i^P$ and $\Pi_i^P$ are very important
and will be the subject of future lectures. For now, we
are interested in exploring the relationship between the
classes $\mbox{ATIME}(t(n)), \mbox{ASPACE}(s(n))$ and
classical complexity classes defined using time- or
space-bounded deterministic Turing machines.
\subsection{Relating ATIME and SPACE}
\begin{theorem}
\label{thm:atime}
If $s(n) \ge \log n$,
$\mbox{SPACE}(s(n)) \subseteq \mbox{ATIME}(O(s(n)^2))
\subseteq \mbox{SPACE}(O(s(n)^2))$.
\end{theorem}
\proof{ The containment
$\mbox{ATIME}(s(n)) \subseteq \mbox{SPACE}(O(s(n)))$, which yields
the second inclusion of the theorem upon substituting $O(s(n)^2)$
for $s(n)$, is the less interesting half, and we
will not spend much time on it. The idea of the
proof is that a deterministic machine may simulate
an alternating machine by performing a depth-first
search of its computation tree. If the tree has
depth $s(n)$ and we are doing the simulation in a
space-efficient manner, we will maintain the complete
configuration of the node currently being visited
(requiring $O(s(n))$ space), and we will maintain a
stack representing the computation path leading to
this node. This stack has depth $s(n)$, and each
entry in the stack must remember a sufficient set
of information to enable the simulation to ``back up''
to the previous configuration when it pops up the
stack. For example, it suffices for each stack
entry to store
\begin{enumerate}
\item its control state,
\item whether it moved the R/W head left or right,
\item what symbol it overwrote on the work tape, and
\item how many of its outgoing transitions are
already known to be accepting.
\end{enumerate}
Each of these bits of information
requires only $O(1)$ space per stack entry.
The more interesting half of the theorem is the containment
$\mbox{SPACE}(s(n)) \subseteq \mbox{ATIME}(O(s(n)^2))$. To
prove this, we must begin by precisely formulating what we
mean by ``configuration of a Turing machine'' and quantifying
the number of bits needed to store a configuration. A
configuration (depicted in figure~\ref{fig:tmconf})
is specified by giving the contents of the
machine's work tape as a finite sequence of symbols
$\sigma_1 \sigma_2 \ldots \sigma_{s(n)}$ from the
alphabet $\Sigma$; if the machine is in state $q$ and its
read/write head is situated at position $j$ on the tape,
then we replace $\sigma_j$ in the sequence above with
$\anglebrackets{\sigma_j,q}$.
In the case of space-bounded
computation, where the input is given on a separate
read-only tape, the machine's configuration also contains
a counter representing the read-only head's position on this
tape. Thus, if the machine uses $s(n)$ space on its work tape,
the configuration is represented by a string of $s(n)$ symbols
from a finite alphabet, plus a counter taking values between
0 and $n$. The number of bits required to represent a configuration
is therefore $O(s(n) + \log n)$, which is $O(s(n))$ because of our
assumption that $s(n) \ge \log n$.
The transcript of a $\mbox{SPACE}(s(n))$ computation can be represented
by a sequence of configurations, each having size $O(s(n))$. The
length of the sequence is $2^{O(s(n))}$, since this is an upper bound
on the running time of any $\mbox{SPACE}(s(n))$ computation
when $s(n) \ge \log n$.
\begin{figure}
\begin{center}
\begin{picture}(500,110)
\put(0,10){
\begin{tabular}{|c|c|c|p{1cm}|c|c|}
\hline
$\sigma_0$ & $\sigma_1$ & $\sigma_2$ & \ldots &
$\sigma_{n-1}$ & $\sigma_n$ \\
\hline
& & $q$ & & & \\
\hline
\end{tabular}
}
\put(10,55){\line(1,0){40}}
\put(50,45){\oval(20,20)[tr]}
\put(60,45){\vector(0,-1){15}}
\put(350,105){\makebox(0,0){$O(s(n))$}}
\put(370,105){\vector(1,0){30}}
\put(330,105){\vector(-1,0){30}}
\put(300,80){\framebox(100,13){initial configuration}}
\multiput(300,60)(0,-20){2}{\framebox(100,13){}}
\multiput(350,33)(0,-7){3}{\circle*{2}}
\put(300,0){\framebox(100,13)[l]{\hspace{1mm} $q_f$}}
\put(320,0){\line(0,1){13}}
\put(270,45){\makebox(0,0){$2^{O(s(n))}$}}
\put(270,56){\vector(0,1){37}}
\put(270,35){\vector(0,-1){35}}
\end{picture}
\end{center}
\caption{Turing machine configuration and computation transcript}
\label{fig:tmconf}
\end{figure}
When describing
pseudo-code for alternating Turing machine algorithms,
we will write ``GUESS'' to refer to the
machine entering a $\exists$ state, and
``FORALL \{ CHECK1; CHECK2 \}'' to refer to
the machine entering a $\forall$ state in which
it accepts if and only if both CHECK1 and CHECK2
accept.
Given a machine $M$ which decides a language \lang{L} in
$\mbox{SPACE}(s(n))$, we must exhibit an ATM which accepts
\lang{L} and runs in time $O(s(n)^2)$. The algorithm will
be based on the following procedure
$\mbox{CHECK}(C_0,C_f,T)$ which checks
whether $M$ has a computation path which
starts in configuration $C_0$ at time 0
and ends in $C_f$ at time $T$, by guessing
the configuration at the midpoint of the
computation and recursively checking both halves.
\begin{tabbing}
$\mbox{CHECK}(C_0, C_f, T)$: \\
\hspace{0.5in} \= \kill
\> 1. GUESS a configuration $C_{\mathrm{mid}}$. \\
\> 2. FORALL $\{ \mbox{CHECK}(C_0, C_{\mathrm{mid}}, \lfloor T/2 \rfloor);
\; \mbox{CHECK}(C_{\mathrm{mid}}, C_f, \lceil T/2 \rceil) \}$
\end{tabbing}
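For intuition, the control flow of CHECK can be mimicked deterministically (at exponential cost in time, since the simulation must search where the ATM guesses): GUESS becomes a loop over all candidate midpoints and FORALL a conjunction. A toy Python sketch, with the machine abstracted as a successor function on a finite configuration set (all names here are choices made for illustration):

```python
def make_check(step, configs):
    """Savitch-style reachability check.  step(C) is the successor of
    configuration C for a deterministic machine; configs is the finite set
    of possible configurations.  GUESS becomes a search over midpoints, and
    FORALL becomes a conjunction of the two recursive calls."""
    def check(c0, cf, T):
        if T == 0:
            return c0 == cf
        if T == 1:
            return step(c0) == cf
        # Guess C_mid, then check both halves (floor/ceil split as in CHECK).
        return any(check(c0, mid, T // 2) and check(mid, cf, T - T // 2)
                   for mid in configs)
    return check

# Toy machine: configurations 0..7, successor is +1 mod 8.
check = make_check(lambda c: (c + 1) % 8, range(8))
```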
The running time of $\mbox{CHECK}(C_0, C_f, T)$ is
$O(s(n) \log T)$, as is easily established by induction on $T$.
Indeed, step 1 requires time $O(s(n))$ --- the size of
$C_{\mathrm{mid}}$ --- and, by the induction
hypothesis, step 2 requires time $O(s(n) \log(T/2))$; since
$\log(T/2) = \log T - 1$, the total is $O(s(n) \log T)$.
An ATM which runs in time $O(s(n)^2)$ and simulates $M$
can now be described
as follows: guess an accepting configuration $C_f$, and run
$\mbox{CHECK}(C_0, C_f, 2^{O(s(n))})$. The initial guess
of $C_f$ requires time $O(s(n))$, and the CHECK step requires
time $O(s(n) \log(2^{O(s(n))})) = O(s(n) \cdot s(n)) = O(s(n)^2)$,
as desired. $\Box$
}
\section{A complete problem for ATIME(poly)}
Theorem~\ref{thm:atime} establishes that ATIME(poly) = PSPACE,
which in turn sheds light on the phenomenon of PSPACE-completeness.
Define \lang{TQBF} to be the language
\[
\mbox{\lang{TQBF}} =
\{ \phi | \exists x_1 \forall x_2 \exists \ldots \forall x_n \;
\phi(x_1,\ldots,x_n) \mbox{ is true} \}.
\]
Note the analogy with \lang{SAT}, which is the language
\[
\mbox{\lang{SAT}} =
\{ \phi | \exists x_1 \ldots x_n \; \phi(x_1,\ldots,x_n)
\mbox{ is true} \}.
\]
Just as SAT is NP-complete, it is natural to guess that TQBF is
ATIME(poly)-complete, since it captures the expressive
power of alternating quantifiers in much the same way that
SAT captures the expressive power of existential quantifiers.
This indeed turns out to be the case, as we will see in
Theorem~\ref{thm:tqbf}
below. Thus, the PSPACE-completeness of \lang{TQBF} follows
directly from the fact that ATIME(poly) = PSPACE, and indeed
it is tempting to think of this as the ``real reason'' why
\lang{TQBF} is PSPACE-complete.
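For concreteness, the alternating quantification in the definition of \lang{TQBF} can be unwound into a short recursive evaluator (a brute-force Python sketch taking time $2^n$, with the formula abstracted as a predicate; this is illustration, not the polynomial-time alternating machine of the proof):

```python
def tqbf(phi, n, i=0, assignment=()):
    """Decide  E x1 A x2 E x3 ...  phi(x1,...,xn): quantifier i is
    existential when i is even and universal when i is odd, matching the
    strict alternation in the displayed definition.  phi is a predicate on
    n-tuples of booleans (an abstraction chosen here for the sketch)."""
    if i == n:
        return phi(assignment)
    branches = [tqbf(phi, n, i + 1, assignment + (b,)) for b in (False, True)]
    return any(branches) if i % 2 == 0 else all(branches)

# E x1 A x2: (x1 or not x2) is true (take x1 = True);
# E x1 A x2: (x1 != x2)     is false.
```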
\begin{theorem}
\label{thm:tqbf}
TQBF is ATIME(poly)-complete.
\end{theorem}
\noindent \textbf{Proof Sketch.} It
is clear how to build an ATM that decides TQBF in polynomial
time: the machine passes through an initial sequence of
$\exists$ and $\forall$ states in which the truth values of
$x_1,\ldots,x_n$ are set, then runs a polynomial-time computation
to determine the truth value of $\phi(x_1,\ldots,x_n)$.
To prove that every language in ATIME(poly) is reducible to
TQBF, proceed as in the proof of the Cook-Levin theorem. The
computation of an ATM which runs in time $t(n)$ can be represented
by a sequence of $t(n)$ configurations, each of size $O(t(n))$.
One introduces $O(t(n)^2)$ Boolean variables to completely
describe the machine's configuration at each time step, and
writes down a formula which expresses the fact that the computation
started in configuration $C_0$, finished in an accepting
configuration, and performed legal state transitions at every
step. The alternating quantifiers are used to express the possible
outcomes of the steps in which the computation entered a
$\forall$ or $\exists$ state. One verifies, by construction,
that the size of the formula is polynomial in the input length,
and that the quantified formula is true if and only if the ATM
accepts that input. $\Box$

Extending the intuition we've gained from \lang{SAT} and
\lang{TQBF}, it is natural to define the following two
languages $i \exists \mbox{TQBF}, \, i \forall \mbox{TQBF}$
and to guess that they are complete for $\Sigma_i^P, \,
\Pi_i^P$, respectively.
In a future
lecture, we'll see that this is the case.
\begin{definition} The languages $i \exists \mbox{TQBF}$ and
$i \forall \mbox{TQBF}$ are defined by
\begin{eqnarray*}
i \exists \mbox{TQBF} & = & \{ \phi |
\exists x^1_1 \ldots x^1_{n_1} \forall x^2_1 \ldots x^2_{n_2}
\ldots \ldots \exists x^i_1 \ldots x^i_{n_i} \;
\phi(x^1_1,\ldots,x^i_{n_i}) \mbox{ is true} \} \\
i \forall \mbox{TQBF} & = & \{ \phi |
\forall x^1_1 \ldots x^1_{n_1} \exists x^2_1 \ldots x^2_{n_2}
\ldots \ldots \forall x^i_1 \ldots x^i_{n_i} \;
\phi(x^1_1,\ldots,x^i_{n_i}) \mbox{ is true} \}
\end{eqnarray*}
\end{definition}
\section{Philosophical aside: debate systems}
Resources studied in complexity theory are interesting to the extent
that they characterize the expressive power of interesting phenomena,
and one is led to ask what
type of phenomenon is characterized by the
resource of alternation. One way of describing the
answer is in terms of ``debate systems.''
Imagine a scenario in which a verifier V (with the power to
perform deterministic polynomial-time computations) is charged
with determining the truth or falsehood of a statement of the
form ``$x \in L$,'' where $x$ is a word and $L$ is a language.
The verifier interacts with two entities, Y and N, both of whom
have unlimited computational power. Y does everything in her
power to convince V that the answer is yes, while N does everything
in her power to convince V that the answer is no; V knows that both
are omnipotent and are arguing as hard as possible for their
respective answers.
For what languages can V discover, for each $x$, whether $x \in L$?
The answer depends on the type of interaction allowed between V, Y,
and N.
\begin{description}
\item[No interaction:] In this case, V must decide on its own
whether $x \in L$; the answer can be determined if and only if
L is in P.
\item[One-sided interaction with Y:] Suppose Y is allowed to
send advice to V, but N is not allowed to refute the advice.
The class of languages which can be decided by such a protocol
is NP. For instance, if $L$ is \lang{SAT} and $\phi$ is a formula, Y will
try to send V a satisfying assignment. If $\phi$ is satisfiable,
V can verify this in polynomial time using Y's advice.
If $\phi$ is not satisfiable,
V can verify that Y's assignment does not satisfy $\phi$, and will
conclude that $\phi \not\in \mbox{\lang{SAT}}$ based on the assumption that Y
is omnipotent and is trying as hard as possible to argue
that $\phi \in \mbox{SAT}$.
\item[Full-fledged debate between Y and N:] If Y and N are both
allowed to send messages to V, and to respond to each other's
messages, the class of languages which can be decided by such
a protocol is ATIME(poly) = PSPACE. As an example, if
$(\phi, k)$ is an instance of \lang{MinDNF}, then Y begins
by naming a formula $\phi'$ of length $\le k$ and asserting
that $\phi' \Leftrightarrow \phi$. N attempts to refute this
claim by sending an assignment $x_1,\ldots,x_n$ to distinguish
$\phi'$ from $\phi$. V computes
$\phi'(x_1,\ldots,x_n)$ and $\phi(x_1,\ldots,x_n)$, and tests
whether they have the same truth value. If so, then N failed
to refute Y and we conclude that $(\phi,k) \in \mbox{\lang{MinDNF}}$.
If not, then N's refutation was successful and we conclude that
$(\phi,k) \not\in \mbox{\lang{MinDNF}}$.
\end{description}
\section{Space-bounded alternating computation}
Theorem~\ref{thm:atime} established a relation between
time-bounded alternating computation and space-bounded
deterministic computation. We will now relate {\em space}-bounded
alternating computation to {\em time}-bounded deterministic
computation!
\begin{theorem}
\label{thm:aspace}
Assume $s(n) \ge \log n$. Then
$\mbox{ASPACE}(O(s(n))) = \mbox{TIME}(2^{O(s(n))})$.
\end{theorem}
\proof{
We will abbreviate $2^{O(s(n))}$ as $2^s$ for simplicity.
Once again, we can prove the containment
$\mbox{ASPACE}(O(s(n))) \subseteq \mbox{TIME}(2^s)$
by reasoning about computation trees for ATM's.
Actually, this time we represent the computation using a
directed graph rather than a tree.
An ATM which runs in space $O(s(n))$ has $2^s$
possible configurations (recalling that $s(n) \ge \log n$),
and one can define the directed graph $G$ whose vertices are
the possible configurations and whose edges represent the
legal transitions from one configuration to another. There
is a straightforward deterministic algorithm to enumerate all the
vertices and edges of this graph in time $2^s$.
One can then mark all of the vertices which represent
accepting configurations, via the following algorithm:
\begin{enumerate}
\item Set $t = 0$.
\item For each vertex $v$ of $G$:
\begin{enumerate}
\item If $M$'s control state at $v$ is the accepting state,
mark $v$.
\item If the state at $v$ is ``$\exists$'', and at least one
edge $(v,w)$ points to a marked vertex $w$, mark $v$.
\item If the state at $v$ is ``$\forall$'', and both
outgoing edges $(v,w)$ point to marked vertices $w$, mark $v$.
\item Otherwise there is a unique outgoing edge $(v,w)$. If $w$
is marked, then mark $v$.
\end{enumerate}
\item Increment $t$. If $t > |V(G)|$, halt and accept if and only
if the vertex representing the initial configuration is marked;
otherwise return to step 2.
\end{enumerate}
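The marking procedure translates directly into Python (a sketch over an explicitly stored graph; the dictionary-based labels and successor lists are conveniences chosen here, not part of the lecture):

```python
def mark_accepting(vertices, label, succ):
    """Mark accepting configurations by repeated passes.  label[v] is one of
    'accept', 'exists', 'forall', 'det'; succ[v] lists v's outgoing edges.
    Running |V| passes guarantees the marking has stabilized."""
    marked = set()
    for _ in range(len(vertices)):        # outer loop: |V(G)| passes
        for v in vertices:                # inner loop: each vertex once
            if v in marked:
                continue
            ws = succ[v]
            if label[v] == 'accept':
                marked.add(v)
            elif label[v] == 'exists' and any(w in marked for w in ws):
                marked.add(v)
            elif label[v] == 'forall' and ws and all(w in marked for w in ws):
                marked.add(v)
            elif label[v] == 'det' and ws and ws[0] in marked:
                marked.add(v)
    return marked

# Toy graph: 'dead' is a configuration with no accepting continuation.
vs = ['s', 'a', 'b', 'dead', 'acc']
lab = {'s': 'exists', 'a': 'forall', 'b': 'det',
       'dead': 'det', 'acc': 'accept'}
nxt = {'s': ['a', 'b'], 'a': ['acc', 'dead'],
       'b': ['acc'], 'dead': [], 'acc': []}
```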
Each iteration of the inner loop runs in time $2^s$, the time required
to look up $w$ and check whether it is marked. There are
$2^s$ iterations of the outer loop, each containing
$2^s$ iterations of the inner loop, so the entire
algorithm runs in time $2^{O(s)}$, as desired.
To prove the containment
$\mbox{TIME}(2^s) \subseteq \mbox{ASPACE}(O(s(n)))$,
we model the computation of a deterministic Turing machine $M$
as a tableau whose rows represent the sequence of configurations.
(See figure~\ref{fig:tableau}.) If the Turing machine runs
in time $2^s$, then it also runs in space $2^s$, so the tableau
has $2^s$ rows and $2^s$ columns.
Naively, the algorithm should guess the contents of the
entire tableau and verify the tableau's correctness; however,
guessing the tableau's contents all at once would require
space $2^s$. The key observation which makes the simulation
space-efficient is that the correctness of a cell in the tableau
can be checked by looking at just three cells in the row above.
Formally, the ATM uses a subroutine $\mbox{CHECK}(t,i,\sigma)$
defined as follows and illustrated in figure~\ref{fig:tableau}.
\begin{tabbing}
$\mbox{CHECK}(t,i,\sigma)$ \\
\hspace{0.5in} \= \kill
\> 1. GUESS $\{ \sigma_1, \sigma_2, \sigma_3 \}$ \\
\> 2. Verify that $(\sigma_1, \sigma_2, \sigma_3) \rightarrow \sigma$
is a legal transition \\
\> 3. FORALL $\{ \mbox{CHECK}(t-1,i-1,\sigma_1);
\mbox{CHECK}(t-1,i,\sigma_2); \mbox{CHECK}(t-1,i+1,\sigma_3) \}$.
\end{tabbing}
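To see the recursion in action, one can emulate CHECK deterministically on a toy tableau, with GUESS replaced by enumeration over the $|\Sigma|^3$ candidate triples and FORALL by a conjunction (the local rule below is a one-dimensional cellular automaton standing in for the Turing-machine transition function; all names are choices made for this sketch):

```python
def make_check(legal, row0):
    """Tableau verification in the style of CHECK(t, i, sigma): a cell is
    correct iff some triple (s1, s2, s3) above it is correct and forces it.
    legal(s1, s2, s3) gives the cell forced below the triple; cells outside
    the tableau are read as 0."""
    def cell0(i):
        return row0[i] if 0 <= i < len(row0) else 0
    def check(t, i, sigma):
        if t == 0:
            return sigma == cell0(i)
        # GUESS the three cells above, then verify the transition and
        # recursively check all three (the FORALL step).
        return any(legal(s1, s2, s3) == sigma and
                   check(t - 1, i - 1, s1) and
                   check(t - 1, i, s2) and
                   check(t - 1, i + 1, s3)
                   for s1 in (0, 1) for s2 in (0, 1) for s3 in (0, 1))
    return check

# Toy "machine": rule-90 cellular automaton (new cell = left XOR right).
check = make_check(lambda a, b, c: a ^ c, [0, 0, 1, 0, 0])
```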
Note that CHECK requires $O(s(n))$ bits to store $t$ and $i$,
and this is really the only space required by the algorithm.
Let's adopt, for simplicity, the convention that an accepting
computation halts in state $q_f$, with the read/write head
at the leftmost cell of the tape. In that case,
$\mbox{CHECK}(2^s, 0, q_f)$ runs in space $O(s(n))$
and determines whether $M$ accepts the input, as desired. $\Box$
}
\begin{figure}
\begin{center}
\begin{picture}(250,200)
\multiput(50,170)(0,-13){14}{\line(1,0){169}}
\multiput(50,170)(13,0){14}{\line(0,-1){169}}
\put(57,5){\makebox(0,0){$q_f$}}
\put(135,59){\makebox(0,0){$\sigma$}}
\put(122,72){\makebox(0,0){$\sigma_1$}}
\put(135,72){\makebox(0,0){$\sigma_2$}}
\put(148,72){\makebox(0,0){$\sigma_3$}}
\put(135,180){\makebox(0,0){$2^s$}}
\put(125,180){\vector(-1,0){75}}
\put(145,180){\vector(1,0){75}}
\put(35,85){\makebox(0,0){$2^s$}}
\put(35,95){\vector(0,1){75}}
\put(35,75){\vector(0,-1){75}}
\end{picture}
\end{center}
\caption{Tableau with $\mbox{CHECK}(t,i,\sigma)$ configuration}
\label{fig:tableau}
\end{figure}
\end{document}