Probabilistically checkable proof

In computational complexity theory, a probabilistically checkable proof (PCP) is a type of proof that can be checked by a randomized algorithm using a bounded amount of randomness and reading a bounded number of bits of the proof. The algorithm is then required to accept correct proofs and reject incorrect proofs with very high probability. A standard proof (or certificate), as used in the verifier-based definition of the complexity class NP, also satisfies these requirements, since the checking procedure deterministically reads the whole proof, always accepts correct proofs and rejects incorrect proofs. What makes probabilistically checkable proofs interesting, however, is the existence of proofs that can be checked by reading only a few bits of the proof, using randomness in an essential way.

Probabilistically checkable proofs give rise to many complexity classes depending on the number of queries required and the amount of randomness used. The class PCP[r(n),q(n)] refers to the set of decision problems that have probabilistically checkable proofs that can be verified in polynomial time using at most r(n) random bits and by reading at most q(n) bits of the proof.[1] Unless specified otherwise, correct proofs should always be accepted, and incorrect proofs should be rejected with probability greater than 1/2. The PCP theorem, a major result in computational complexity theory, states that PCP[O(log n),O(1)] = NP.

Definition


Given a decision problem L (or a language L with its alphabet set Σ), a probabilistically checkable proof system for L with completeness c(n) and soundness s(n), where 0 ≤ s(n) ≤ c(n) ≤ 1, consists of a prover and a verifier. Given a claimed solution x of length n, which might be false, the prover produces a proof π (a string over Σ) which states that x solves L (that is, x ∈ L). The verifier is a randomized oracle Turing machine V that checks the proof π for the statement that x ∈ L and decides whether to accept the statement. The system has the following properties:

  • Completeness: For any x ∈ L, given the proof π produced by the prover of the system, the verifier accepts the statement with probability at least c(n).
  • Soundness: For any x ∉ L and any proof π, the verifier mistakenly accepts the statement with probability at most s(n).

For the computational complexity of the verifier, the randomness complexity r(n) is the maximum number of random bits that V uses over all x of length n, and the query complexity q(n) is the maximum number of queries that V makes to π over all x of length n.

In the above definition, the length of the proof is not bounded explicitly: the verifier can read at most q(n)·2^{r(n)} of its positions in total, so its useful length is implicitly limited by the other parameters. For the prover, we do not care how it arrives at the solution to the problem; we care only about the proof it gives of the solution's membership in the language.

The verifier is said to be non-adaptive if it makes all its queries before it receives any of the answers to previous queries.

The complexity class PCP_{c(n), s(n)}[r(n), q(n)] is the class of all decision problems having probabilistically checkable proof systems over a binary alphabet with completeness c(n) and soundness s(n), where the verifier is non-adaptive, runs in polynomial time, and has randomness complexity r(n) and query complexity q(n).

The shorthand notation PCP[r(n), q(n)] is sometimes used for PCP_{1, 1/2}[r(n), q(n)]. The complexity class PCP is defined as PCP_{1, 1/2}[O(log n), O(1)].
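
To make the bookkeeping of the parameters concrete, the following sketch (a toy check in the spirit of a PCP verifier, not an actual PCP for NP) treats the proof as a claimed satisfying assignment of a 3-CNF formula, accessed only through an oracle: one run picks a clause at random, which costs about log2 of the number of clauses in random bits, and makes exactly three queries, and enumerating all random choices gives the exact acceptance probability that the completeness and soundness conditions refer to. The clause encoding and function names are illustrative choices, not part of the definition.

```python
from math import log2

# Toy verifier in the PCP mold (illustrative only, not a PCP for NP):
# the "proof" is a claimed satisfying assignment for a 3-CNF formula,
# accessed only through an oracle.  One run picks one clause at random
# (~log2(#clauses) random bits) and reads the 3 proof bits it mentions.

def verify(clauses, proof_oracle, random_choice):
    """One run of the verifier; random_choice plays the role of the random string."""
    clause = clauses[random_choice % len(clauses)]
    bits = {abs(lit): proof_oracle(abs(lit)) for lit in clause}   # 3 oracle queries
    return any((bits[abs(lit)] == 1) == (lit > 0) for lit in clause)

def acceptance_probability(clauses, proof):
    """Exact acceptance probability: enumerate every random choice of the verifier."""
    runs = [verify(clauses, proof.__getitem__, r) for r in range(len(clauses))]
    return sum(runs) / len(runs)

# (x1 or x2 or x3) and (not x1 or x2 or not x3), literals as signed variable indices
clauses = [(1, 2, 3), (-1, 2, -3)]
honest_proof = {1: 1, 2: 1, 3: 0}                      # a satisfying assignment
print(acceptance_probability(clauses, honest_proof))   # 1.0: completeness 1
print(round(log2(len(clauses))), "random bit(s) per run, 3 queries per run")
```

For a false claim, this toy verifier's rejection probability only grows with the fraction of violated clauses, so it does not by itself achieve soundness 1/2; it is meant only to show how randomness and query counts are measured.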

History and significance


The theory of probabilistically checkable proofs studies the power of probabilistically checkable proof systems under various restrictions of the parameters (completeness, soundness, randomness complexity, query complexity, and alphabet size). It has applications to computational complexity (in particular hardness of approximation) and cryptography.

The definition of a probabilistically checkable proof was explicitly introduced by Arora and Safra in 1992,[2] although such proof systems had been studied earlier. In 1990 Babai, Fortnow, and Lund proved that PCP[poly(n), poly(n)] = NEXP, providing the first nontrivial equivalence between standard proofs (NEXP) and probabilistically checkable proofs.[3] The PCP theorem, proved in 1992, states that PCP[O(log n), O(1)] = NP.[2][4]

The theory of hardness of approximation requires a detailed understanding of the role of completeness, soundness, alphabet size, and query complexity in probabilistically checkable proofs.

Properties


From the computational complexity point of view, for extreme settings of the parameters, the definition of probabilistically checkable proofs is easily seen to be equivalent to standard complexity classes. For example, we have the following for different settings of PCP[r(n), q(n)]:

  • PCP[0, 0] = P (P is defined to have no randomness and no access to a proof.)
  • PCP[O(log(n)), 0] = P (A logarithmic number of random bits does not help a polynomial-time Turing machine, since it can try all possible random strings of logarithmic length in polynomial time; see the sketch after this list.)
  • PCP[0,O(log(n))] = P (Without randomness, the proof can be thought of as a fixed logarithmic sized string. A polynomial time machine could try all possible logarithmic sized proofs in polynomial time.)
  • PCP[poly(n), 0] = coRP (By definition of coRP.)
  • PCP[0, poly(n)] = NP (By the verifier-based definition of NP.)
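
The second item is simply a derandomization argument, sketched below: with r(n) = O(log n) random bits there are only polynomially many random strings, so a deterministic polynomial-time machine can run the verifier on every one of them and tally the outcomes. Here proof_free_verifier is a hypothetical stand-in for any polynomial-time verifier that uses randomness but makes no proof queries.

```python
from itertools import product

# Sketch of the derandomization behind PCP[O(log n), 0] = P:
# with r(n) = c*log2(n) random bits there are only n**c random strings,
# so a deterministic polynomial-time machine can enumerate them all.
# proof_free_verifier(x, bits) is a hypothetical stand-in for a verifier
# with perfect completeness and soundness 1/2 that makes no proof queries.

def derandomized_decision(proof_free_verifier, x, r_bits):
    accepting = sum(proof_free_verifier(x, bits)
                    for bits in product((0, 1), repeat=r_bits))
    # x is in the language iff strictly more than half of the runs accept:
    # completeness 1 makes all 2**r_bits runs accept, soundness 1/2 at most half.
    return 2 * accepting > 2 ** r_bits
```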

The PCP theorem and MIP = NEXP can be characterized as follows:

  • PCP[O(log n), O(1)] = NP (the PCP theorem)
  • PCP[poly(n), O(1)] = PCP[poly(n), poly(n)] = NEXP (MIP = NEXP).

It is also known that PCP[r(n), q(n)] ⊆ NTIME(poly(n, 2^{O(r(n))}·q(n))): a nondeterministic machine can guess the values of the at most 2^{O(r(n))}·q(n) proof positions that the verifier could ever read and then check every random string deterministically. In particular, PCP[log n, poly(n)] = NP. On the other hand, if NP ⊆ PCP[o(log n), o(log n)], then P = NP.[2]
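
This containment can be sketched with the same kind of enumeration as above, this time combined with a nondeterministic guess; verifier and the guessed proof values below are hypothetical stand-ins for the PCP verifier and its proof.

```python
from itertools import product

# Sketch of PCP[r(n), q(n)] ⊆ NTIME(poly(n, 2^{O(r(n))}·q(n))):
# the nondeterministic machine guesses the values of every proof position the
# verifier could read (at most q(n) per random string, 2^{r(n)} random strings),
# then deterministically simulates the verifier on all random strings.
# verifier(x, bits, oracle) is a hypothetical stand-in for the PCP verifier.

def check_guess(verifier, x, r_bits, guessed_proof):
    """Deterministic part of the simulation, run once per nondeterministic guess."""
    return all(verifier(x, bits, guessed_proof.get)     # q(n) oracle queries per run
               for bits in product((0, 1), repeat=r_bits))
    # Accept iff every random string accepts: with perfect completeness an honest
    # guess works for x in L, while for x not in L no guess passes more than half.
```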

Linear PCP


A linear PCP is a PCP in which the proof is a vector of elements of a finite field 𝔽 (that is, π ∈ 𝔽^n), and in which the PCP oracle is only allowed to perform linear operations on the proof. Namely, the response from the oracle to a verifier query q ∈ 𝔽^n is the linear function f(q) = ⟨q, π⟩. Linear PCPs have important applications in proof systems that can be compiled into SNARKs.
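
As a minimal sketch of this restriction, assume the prime field F_p and treat the proof as a vector over it; the field size, dimension, and function names below are illustrative choices rather than anything fixed by the definition.

```python
# Minimal sketch of a linear PCP oracle over a prime field F_p (p = 97 is an
# arbitrary illustrative choice): the proof is a vector pi in F_p^n, and every
# query vector q is answered with the inner product <q, pi> mod p, so the
# oracle's answers are linear functions of the proof.

P = 97  # small prime standing in for the field F_p

def linear_pcp_oracle(pi):
    """Wrap a proof vector pi so the verifier only sees linear combinations of it."""
    def answer(q):
        assert len(q) == len(pi)
        return sum(qi * pii for qi, pii in zip(q, pi)) % P
    return answer

# Usage: the verifier never sees pi directly, only linear functions of it.
oracle = linear_pcp_oracle([3, 5, 7])
print(oracle([1, 0, 0]))   # 3: reading a single coordinate is itself a linear query
print(oracle([2, 1, 4]))   # (2*3 + 1*5 + 4*7) % 97 = 39
```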

References

  1. Arora, Sanjeev; Barak, Boaz (2007), Computational Complexity: A Modern Approach, Cambridge University Press, p. 241, ISBN 978-0-521-42426-4
  2. Arora, Sanjeev; Safra, Shmuel (1998), "Probabilistic checking of proofs: A new characterization of NP", Journal of the ACM, 45 (1): 70–122, doi:10.1145/273865.273901, S2CID 751563
  3. Babai, László; Fortnow, Lance; Lund, Carsten (1990), "Nondeterministic exponential time has two-prover interactive protocols", Proceedings of the 31st Annual Symposium on Foundations of Computer Science (FOCS 1990), pp. 16–25, CiteSeerX 10.1.1.130.9311, doi:10.1109/FSCS.1990.89520, ISBN 978-0-8186-2082-9, S2CID 38429596
  4. Arora, Sanjeev; Lund, Carsten; Motwani, Rajeev; Sudan, Madhu; Szegedy, Mario (1998), "Proof verification and the hardness of approximation problems", Journal of the ACM, 45 (3): 501–555, doi:10.1145/278298.278306, S2CID 8561542