\documentclass{article}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{latexsym}
\usepackage{graphicx}
\setlength{\oddsidemargin}{.25in}
\setlength{\evensidemargin}{.25in}
\setlength{\textwidth}{6in}
\setlength{\topmargin}{-0.5in}
\setlength{\textheight}{8.5in}
\newcommand{\header}{
\renewcommand{\thepage}{\arabic{page}}
\noindent
\begin{center}
\framebox{
\vbox{
\hbox to 5.78in {\bf 6.841/18.405J: Advanced Complexity \hfill Monday, March 10th, 2003}
\vspace{2mm}
\hbox to 5.78in { {\Large \hfill Lecture 9\hfill} }
\vspace{2mm}
\hbox to 5.78in { {\it Instructor: Madhu Sudan \hfill Scribe: Fu Liu} }
}
}
\end{center}
\vspace*{3mm}
}
\newtheorem{theorem}{Theorem}
\newtheorem{claim}{Claim}
\newtheorem{lemma}{Lemma}
\begin{document}
\header
\begin{itemize}
\item Counting Problems
\item Complexity of Unique Satisfiability
\item \#P
\end{itemize}
\section{Counting Computations}
Counting the number of accepting paths of a nondeterministic TM motivates the definition of the class \#P.
{\bf Definition:} A function $f: \{0, 1\}^* \to \mathbb{Z}^{\ge 0}$ is in \#P if there exists a polynomial-time nondeterministic TM $M$ such that
$$f(x) = |\{y | M(x,y) \mbox{ accepts}\}|.$$
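For instance, the canonical \#P function \#SAT maps a CNF formula to its number of satisfying assignments; each satisfying assignment corresponds to one accepting path of the natural NP machine for SAT. A brute-force sketch in Python (the function name and clause encoding are illustrative assumptions, not from the lecture):

```python
from itertools import product

def count_sat(clauses, n):
    """#SAT: count the satisfying assignments of a CNF formula over n
    variables. Each clause is a list of DIMACS-style literals (i > 0
    for variable x_i, i < 0 for its negation). Each satisfying
    assignment is one accepting path of the natural NP machine for
    SAT, so this function is in #P (the brute-force loop below, of
    course, takes 2^n time)."""
    count = 0
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or x2): satisfied exactly when x2 is true.
print(count_sat([[1, 2], [-1, 2]], 2))  # prints 2
```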
\section{Complexity of Unique Satisfiability}
\subsection{Motivation}
Motivated by the hardness of NP-complete problems, Diffie and Hellman ('76) introduced the notion of a ``public key cryptosystem.'' Consider the following examples:
\begin{enumerate}
\item Given a formula $\phi$ and one of its satisfying assignments $a,$ delete $a.$ That is: $$(\phi, a) \to \phi.$$
\item Given two primes $p$ and $q,$ multiply them to get $p \cdot q.$ That is: $$(p, q) \to p \cdot q.$$
\end{enumerate}
Note that both maps above are easy to compute. Their inverses, however, are another matter: inverting the first means finding a satisfying assignment of $\phi,$ and inverting the second means factoring a number into two primes. Both inverses are believed to be hard to compute. How, then, does the hardness of factoring relate to that of finding a satisfying assignment? Several similar questions were being asked at the time:
\begin{enumerate}
\item How did ``changing the problem'' (SAT $\to$ Factoring) affect its hardness?
\item What if $(\phi, a)$ were chosen at random?
\item Note that the first mapping is information-losing, since it discards the information of $a,$ while the second is information-preserving: given $p \cdot q,$ there is only one way to factor it (taking $p \le q$). What if $\phi$ has only one satisfying assignment $a$? In that case, the first mapping also becomes information-preserving.
\end{enumerate}
\subsection{Formalizing the Problem}
Following the discussion of the last section, one may consider the problem: ``Given a formula $\phi$ that has exactly one satisfying assignment $a,$ compute the assignment $a.$'' This leads us to a new language USAT. Before giving the formal definition of USAT, we first introduce promise problems.
{\bf Definition:} A promise problem $\Pi$ consists of two sets, $\Pi_{YES}$ and $\Pi_{NO},$ both of which are subsets of $\{0,1\}^*$ and whose intersection is the empty set, i.e.
\begin{itemize}
\item $\Pi = (\Pi_{YES}, \Pi_{NO})$
\item $\Pi_{YES}, \Pi_{NO} \subseteq \{0,1\}^*$
\item $\Pi_{YES} \cap \Pi_{NO} = \emptyset.$
\end{itemize}
Given a promise problem, our goal is to find an algorithm $A$ such that, given $x,$ $x \in \Pi_{YES} \Rightarrow A(x) = 1$ and $x \in \Pi_{NO} \Rightarrow A(x) = 0.$ (On inputs outside $\Pi_{YES} \cup \Pi_{NO},$ the behavior of $A$ is unconstrained.)
Now we define USAT as a promise problem:
\begin{itemize}
\item USAT $= (\Pi_{YES}, \Pi_{NO})$
\item $\Pi_{YES} = \{ \phi | \phi \mbox{ has exactly one satisfying assignment.} \}$
\item $\Pi_{NO} = \{ \phi | \phi \mbox{ has no satisfying assignments.} \}$
\end{itemize}
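To make the promise semantics concrete, here is a brute-force sketch (the function names and clause encoding are illustrative assumptions): a decider must answer correctly on $\Pi_{YES}$ and $\Pi_{NO},$ while on formulas with two or more satisfying assignments any output is acceptable.

```python
from itertools import product

def num_satisfying(clauses, n):
    """Count satisfying assignments of a CNF over n variables
    (DIMACS-style literals: i > 0 for x_i, i < 0 for its negation)."""
    return sum(
        all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
            for clause in clauses)
        for bits in product([False, True], repeat=n))

def usat_decider(clauses, n):
    """Correct on the promise: returns 1 on Pi_YES (exactly one
    satisfying assignment) and 0 on Pi_NO (none). On inputs outside
    the promise (two or more assignments) any answer is acceptable;
    this sketch happens to return 1 there."""
    return 1 if num_satisfying(clauses, n) >= 1 else 0

print(usat_decider([[1], [2]], 2))   # unique assignment x1 = x2 = 1 -> 1
print(usat_decider([[1], [-1]], 2))  # unsatisfiable -> 0
```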
\subsection{Hardness of USAT}
Let us look back at the inverse of the mapping $(\phi, a) \to \phi.$ If $\phi$ comes from $\Pi_{YES}$ of USAT, then since $a$ is unique, the map is information-preserving, so the inverse exists; but we do not know whether it is easy to compute. Valiant and Vazirani showed that if there is an algorithm that computes the inverse mapping efficiently, then NP = RP:
{\bf Theorem:} If USAT $\in$ P, then NP = RP.
This theorem is proved by the following lemma:
{\bf Lemma:} USAT is NP-hard under randomized reductions from SAT.
We need a randomized reduction $\phi \in SAT? \longrightarrow \psi \in USAT?$ such that:
\begin{itemize}
\item $\phi \in SAT \longrightarrow \psi$ has exactly one satisfying assignment with probability at least ${1 \over {poly(n)}}$.
\item $\phi \not\in SAT \longrightarrow \psi \not\in USAT$ with probability $1.$
\end{itemize}
\subsection{Reduction Idea}
Assume first that $\phi$ has exactly $M$ satisfying assignments, and consider the reduction: $$\phi(x) \longrightarrow \psi(x) = \phi(x) \wedge p(x).$$
Pick a random function $h: \{0, 1\}^n \to \{1, 2, \dots, 4M\},$ where $n$ is the number of variables of $\phi.$ Let $p(x)$ be the predicate ``$h(x) = 1$''; then we claim that $$Pr[\exists x: (\phi(x) = 1 \wedge h(x) = 1) \wedge (\forall y \neq x: \phi(y) \neq 1 \vee h(y) \neq 1)] \ge 1/8.$$
We will prove this claim later. First, let us look at some problems with implementing this idea:
\begin{enumerate}
\item $h, p$ are not succinct.
\item $M$ is not known.
\end{enumerate}
We address these problems as follows:
\begin{enumerate}
\item Pick $h$ at random from a pairwise independent family of hash functions $\cal H.$
{\bf Definition:} $\cal H$ $= \{h: \{0,1\}^n \to \{1, 2, \dots, \alpha M\}\}$ is a pairwise independent family of hash functions if $\forall x \neq y \in \{0,1\}^n, \forall a, b \in \{1, 2, \dots, \alpha M\}, Pr_{h \in \cal H}[h(x) = a \wedge h(y) = b] = {1 \over {(\alpha M)^2}}.$
Let $\cal H$ $= \{h_{c,d} | c, d \in \{1, 2, \dots, \alpha M\} \},$ where $h_{c,d}(x) = (c \cdot x + d)$ (mod $\alpha M$), interpreting $x$ as an integer. If $\alpha M$ is prime and distinct inputs remain distinct modulo $\alpha M,$ then $\cal H$ is a pairwise independent family of hash functions.
Now, let us look back at our reduction, picking $h$ from the above defined $\cal H.$
Let $S \subseteq \{0,1\}^n$ be the set of satisfying assignments of $\phi,$ $|S| = M.$ Then,
$$Pr_{h \in {\cal H}} [\exists x, y \in S: x \neq y, h(x) = h(y) = 1] \le \sum_{x,y} Pr_{h \in {\cal H}} [h(x) = h(y) = 1] \le |S|^2 \cdot {1 \over {(\alpha M)^2}} = {1 \over {\alpha^2}}.$$
$$Pr_{h \in {\cal H}} [\exists x \in S: h(x) = 1] \ge \sum_x Pr_{h \in {\cal H}} [h(x) = 1] - \sum_{x,y} Pr_{h \in {\cal H}} [h(x) = h(y) = 1] \ge {1 \over \alpha} - {1 \over {\alpha^2}}.$$
Therefore,
\begin{eqnarray}
& & Pr_{h \in {\cal H}} [\exists ! x \in S: h(x) = 1] \nonumber \\
&\ge& Pr_{h \in {\cal H}} [\exists x \in S: h(x) = 1] - Pr_{h \in {\cal H}} [\exists x, y \in S: x \neq y, h(x) = h(y) = 1] \nonumber \\
&\ge& ({1 \over \alpha} - {1 \over {\alpha^2}}) - {1 \over {\alpha^2}} \nonumber \\
&=& {1 \over \alpha} - {2 \over {\alpha^2}}. \nonumber
\end{eqnarray}
Our goal is that $Pr_{h \in {\cal H}} [\exists ! x \in S: h(x) = 1] \ge$ some positive constant, which holds iff $\alpha > 2.$ In particular, picking $\alpha = 4$ proves our claim that
$$Pr[\exists x: (\phi(x) = 1 \wedge h(x) = 1) \wedge (\forall y \neq x: \phi(y) \neq 1 \vee h(y) \neq 1)] = Pr_{h \in {\cal H}} [\exists ! x \in S: h(x) = 1] \ge 1/8.$$
\item It turns out that the probability calculation still works if we estimate $M$ within a factor of $2.$ Therefore, we only need an $M'$ such that $M \le M' \le 2M,$ so we randomly pick $M'$ from $\{1, 2, 4, \dots, 2^n\}.$
\end{enumerate}
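The hash family $h_{c,d}$ used above can be sanity-checked exhaustively. The following sketch (a hypothetical helper, not from the lecture; it works over $\mathbb{Z}_m$ with $c, d \in \{0, \dots, m-1\}$ and assumes a prime modulus $m$) verifies that every pair of distinct inputs and target values is hit by exactly one $(c, d)$, and shows that this can fail for a composite modulus:

```python
def check_pairwise_independent(m):
    """Exhaustively check that the family h_{c,d}(x) = (c*x + d) % m,
    with c, d ranging over Z_m, is pairwise independent: for every
    pair of distinct inputs x != y in Z_m and every target pair
    (a, b), exactly one of the m^2 functions maps x -> a and y -> b,
    i.e. the event has probability 1/m^2 over a random h."""
    for x in range(m):
        for y in range(x + 1, m):
            hits = {}
            for c in range(m):
                for d in range(m):
                    key = ((c * x + d) % m, (c * y + d) % m)
                    hits[key] = hits.get(key, 0) + 1
            # Pairwise independence: every (a, b) is hit exactly once.
            if len(hits) != m * m or any(v != 1 for v in hits.values()):
                return False
    return True

print(check_pairwise_independent(7))  # prime modulus: True
print(check_pairwise_independent(6))  # composite modulus: False
```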
\subsection{Summarizing the Reduction}
\begin{itemize}
\item Given $\phi$
\item Pick $M' \in_R \{1, 2, 4, \dots, 2^n\}$
\item Let $\cal H$ be a pairwise independent family of hash functions mapping $\{0, 1\}^n \to \{1, 2, \dots, 4M'\}$
\item Pick $h \in \cal H$ at random, let $\psi(x) = \phi(x) \wedge (h(x) = 1).$
\end{itemize}
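The summarized reduction can be sketched end to end as a toy program. This is only an illustration under some simplifying assumptions (hypothetical function names; the hash range is taken as $\{0, \dots, 4M'-1\}$ with target value $0$; satisfiability is checked by brute force; and the constraint $h(x) = 0$ is enforced by filtering assignments rather than by encoding $h$ as a Boolean formula, as a real reduction must):

```python
import random
from itertools import product

def satisfying_assignments(clauses, n):
    """All satisfying assignments of a CNF over n variables; each
    clause is a list of DIMACS-style literals (i > 0 for x_i, i < 0
    for its negation)."""
    return [bits for bits in product([0, 1], repeat=n)
            if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
                   for clause in clauses)]

def vv_reduction(clauses, n, rng):
    """One round of the reduction: guess M' in {1, 2, 4, ..., 2^n},
    pick h_{c,d} from the family with range size 4M', and keep only
    the satisfying assignments x of phi with h(x) = 0 (these are
    exactly the satisfying assignments of psi)."""
    Mp = 2 ** rng.randrange(n + 1)       # random guess for |S|
    m = 4 * Mp                           # hash range size
    c, d = rng.randrange(m), rng.randrange(m)
    def h(bits):
        x = int(''.join(map(str, bits)), 2)  # read bits as an integer
        return (c * x + d) % m
    return [a for a in satisfying_assignments(clauses, n) if h(a) == 0]

# phi = (x1 or x2) and (not x1 or x2) has two satisfying assignments.
# If phi is unsatisfiable, every round returns [] (psi in Pi_NO); if
# phi is satisfiable, a 1/poly(n) fraction of rounds should isolate a
# unique assignment (psi in Pi_YES).
rng = random.Random(1)
rounds = [vv_reduction([[1, 2], [-1, 2]], 2, rng) for _ in range(2000)]
print(any(len(r) == 1 for r in rounds))  # some round isolates one
```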
\section{Toda's Theorem}
{\bf Theorem:} PH $\subseteq$ $P^{\#P}.$
\end{document}