Quantum Computing

First published Sun Dec 3, 2006; substantive revision Wed Dec 6, 2006

Combining physics, mathematics and computer science, quantum computing has developed in the past two decades from a visionary idea to one of the most fascinating areas of quantum mechanics. The recent excitement in this lively and speculative domain of research was triggered by Peter Shor (1994), who showed how a quantum algorithm could exponentially "speed up" classical computation and factor large numbers into primes far more rapidly (at least in terms of the number of computational steps involved) than any known classical algorithm. Shor's algorithm was soon followed by several other algorithms aimed at solving combinatorial and algebraic problems, and in the last few years the theoretical study of quantum systems serving as computational devices has made tremendous progress. It is commonly believed that the implementation of Shor's algorithm on a large-scale quantum computer would have devastating consequences for current cryptographic protocols, which rely on the premise that the worst-case running time of all known classical factoring algorithms grows super-polynomially with the length of the input (see, e.g., Preskill 2005). Consequently, experimentalists around the world are engaged in intense efforts to overcome the technological difficulties that stand in the way of realizing such a large-scale quantum computer. But regardless of whether these technological problems can be overcome (Unruh 1995, Ekert and Jozsa 1996, Haroche and Raimond 1996), it is noteworthy that no proof yet exists of the general superiority of quantum computers over their classical counterparts.

The philosophical interest in quantum computing is threefold. First, from a social-historical perspective, quantum computing is a domain where experimentalists find themselves ahead of their fellow theorists. Indeed, quantum mysteries such as entanglement and nonlocality were long dismissed as philosophical quibbles, until physicists discovered that these mysteries might be harnessed to devise new efficient algorithms. But while the technology for isolating 5 or even 7 qubits (the basic unit of information in the quantum computer) is now within reach (Schrader et al. 2004, Knill et al. 2000), only a handful of quantum algorithms exist, and the question of whether these can solve classically intractable computational problems is still open. Second, from a more philosophical perspective, advances in quantum computing may yield foundational benefits. It may turn out that the technological capabilities that allow us to isolate quantum systems, by shielding them from the effects of decoherence for a period of time long enough to manipulate them, will also allow us to make progress on some fundamental problems in the foundations of quantum theory itself. Indeed, the development and implementation of efficient quantum algorithms may help us better understand the boundary between classical and quantum physics, and hence shed light on an important problem, namely the measurement problem, which so far resists solution. Finally, the idea that abstract mathematical concepts such as complexity and (in)tractability may not only be translated into physics but also rewritten by physics bears directly on the autonomous character of computer science and the status of its theoretical entities, the so-called "computational kinds". As such, it is also relevant to the long-standing philosophical debate on the relationship between mathematics and the physical world.

  • 1. A Brief History of the Field
  • 1.1 Physical Computational Complexity
  • 1.2 Physical "Short-cuts" of Computation
  • 1.3 Milestones
  • 2. Basics
  • 2.1 The Qubit
  • 2.2 Quantum Gates
  • 2.3 Quantum Circuits
  • 3. Quantum Algorithms
  • 3.1 Quantum-Circuit-Based Algorithms
  • 3.1.1 The Deutsch Oracle
  • 3.1.2 The Deutsch-Jozsa Oracle
  • 3.1.3 The Simon Oracle
  • 3.1.4 Shor's Algorithm
  • 3.1.5 Grover's Algorithm
  • 3.2 Adiabatic Algorithms
  • 3.3 Measurement-Based Algorithms
  • 3.4 Topological-Quantum-Field-Theory (TQFT) Algorithms
  • 3.5 Realizations
  • 4. Philosophical Implications
  • 4.1 What Is Quantum in Quantum Computing?
  • 4.2 Experimental Metaphysics
  • 4.3 Are There Computational Kinds?
  • Bibliography
  • Other Internet Resources
  • Related Entries

1. A Brief History of the Field

1.1 Physical Computational Complexity

The mathematical model for a "universal" computer was defined long before the invention of computers and is called the Turing machine (Turing 1936). A Turing machine consists of an unbounded tape, a head that is capable of reading from the tape and of writing onto it and that can occupy one of a finite number of internal states, and an instruction table (a transition function). This table, given the head's current state and the symbol it reads from the tape in that state, determines (a) the symbol that the head will write on the tape, (b) the internal state it will next occupy, and (c) the displacement of the head on the tape. In 1936 Turing showed that since one can encode the instruction table of a Turing machine T and express it as a binary number #(T), there exists a universal Turing machine U that can simulate the instruction table of any Turing machine on any given input with at most a polynomial slowdown (i.e., the number of computational steps required by U to execute the original program T on the original input is bounded by a polynomial in the number of steps required by T itself). That the Turing machine model (what we nowadays call "an algorithm") captures the concept of computability in its entirety is the essence of the Church-Turing thesis, according to which any effectively calculable function can be computed by a Turing machine. Admittedly, no counterexample to this thesis (which is the result of convergent ideas of Turing, Post, Kleene and Church) has yet been found. But since it identifies the class of computable functions with the class of functions computable by a Turing machine, the thesis relates a precise mathematical notion to an informal and intuitive one, and hence can be neither proved nor disproved. Simple cardinality considerations show, however, that not all functions are Turing-computable (the set of all Turing machines is countable, while the set of all functions from the natural numbers to the natural numbers is not), and the discovery of this fact came as a complete surprise in the 1930s (Davis 1958).
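
To make the model concrete, here is a minimal sketch of a Turing machine simulator in Python. The particular machine below (a toy machine that appends a "1" to a unary string) and all names are our illustrative inventions, not part of Turing's formalism:

    # A minimal Turing machine simulator (illustrative sketch).
    # The transition table maps (state, symbol) to (write, move, next_state),
    # mirroring clauses (a)-(c) in the text; "_" stands for a blank cell.

    def run_turing_machine(table, tape, state="start", head=0, max_steps=10000):
        cells = dict(enumerate(tape))  # the unbounded tape, sparsely stored
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, "_")
            write, move, state = table[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Toy machine: scan right over a block of 1s and append one more.
    increment = {
        ("start", "1"): ("1", "R", "start"),
        ("start", "_"): ("1", "R", "halt"),
    }
    print(run_turing_machine(increment, "111"))  # -> "1111"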

Computability, the question of whether a function can be computed at all, is not the only question that interests computer scientists. The cost of computing a function is also of great importance, and this cost, also known as computational complexity, is naturally measured in the physical resources (e.g., time, space, energy) invested in solving the computational problem at hand. Computer scientists classify computational problems according to the way their cost function behaves as a function of the input size n (the number of bits required to store the input), and in particular according to whether it increases exponentially or polynomially with n. Tractable problems are those which can be solved at polynomial cost, while intractable problems are those which can only be solved at exponential cost (polynomial-cost solutions are commonly regarded as efficient, although an exponential-time algorithm could turn out to be faster than a polynomial-time one over some range of input sizes). If we further relax the requirement that a solution to a computational problem always be correct, and allow probabilistic algorithms with a negligible probability of error, we can dramatically reduce the computational cost. Probabilistic algorithms run on non-deterministic Turing machines whose transition function selects randomly among several possible moves at each step. The most famous example of an algorithm of this kind is the probabilistic algorithm that decides whether a given natural number is prime in a polynomial number of steps (Rabin 1976).
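
To illustrate, here is a sketch of the Miller-Rabin test, a descendant of the algorithm cited above: each round takes polynomially many steps in the length of n, and after k rounds the probability of error is at most 4^(-k). The rendering is a standard textbook one, not drawn from the entry:

    import random

    def is_probably_prime(n, rounds=20):
        """Miller-Rabin: polynomial cost, error probability <= 4**-rounds."""
        if n < 2:
            return False
        if n in (2, 3):
            return True
        if n % 2 == 0:
            return False
        d, s = n - 1, 0
        while d % 2 == 0:          # write n - 1 as (2**s) * d with d odd
            d, s = d // 2, s + 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)       # fast modular exponentiation
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False       # 'a' witnesses that n is composite
        return True                # no witness found; almost surely prime

    print(is_probably_prime(2**61 - 1))  # a Mersenne prime -> True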

Using this notion we can further refine the distinction between tractable and intractable problems. The class P (for Polynomial) contains all the computational problems that can be solved by a deterministic Turing machine at polynomial cost. The class NP (for Non-deterministic Polynomial) contains all those computational problems whose proposed solutions ("guessed" by the non-deterministic Turing machine) can be verified by a deterministic Turing machine at polynomial cost. The hardest problems in NP are called "NP-complete". The term "complete" designates the fact that these problems stand or fall together: either they are all tractable, or none of them is! If we knew how to solve one NP-complete problem efficiently (i.e., at polynomial cost), we could solve any problem in NP with only polynomial overhead (Cook 1971). Today we know of hundreds of examples of NP-complete problems (Garey and Johnson 1979), all of which are reducible to one another at polynomial cost, and since the best known algorithm for any of these problems is exponential, the widely believed conjecture is that no polynomial-time algorithm can solve them. But while clearly P ⊆ NP, proving or disproving the conjecture that P ≠ NP remains perhaps the most important open question in computer science and complexity theory.
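
The asymmetry that defines NP, namely that checking a proposed solution is easy even when finding one appears hard, can be exhibited in a short sketch: a polynomial-time verifier for Boolean satisfiability, the first problem proven NP-complete (Cook 1971). The encoding of formulas below is our own illustrative choice:

    # Verifying a certificate (a truth assignment) for a SAT formula in CNF.
    # A literal k denotes variable |k|, negated when k < 0. Verification is
    # linear in the formula's size, while naive search tries 2**n assignments.

    def verify_sat(clauses, assignment):
        return all(
            any(assignment[abs(lit)] == (lit > 0) for lit in clause)
            for clause in clauses
        )

    formula = [[1, -2], [2, 3]]  # (x1 or not x2) and (x2 or x3)
    print(verify_sat(formula, {1: True, 2: False, 3: True}))    # True
    print(verify_sat(formula, {1: False, 2: False, 3: False}))  # False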

Although the original Church-Turing thesis involved the abstract mathematical notion of computability, physicists as well as computer scientists often interpret it as saying something about the scope and limitations of physical computing machines. For example, Wolfram (1985) claims that any physical system can be simulated (to any degree of approximation) by a universal Turing machine, and that complexity bounds on Turing machine simulations have physical significance. If, for instance, the computation of the minimum energy of some system of n particles requires a number of steps that grows at least exponentially with n, then the actual relaxation of this system to its minimum energy state will also take exponential time. Aharonov (1998) strengthens this thesis (in the context of showing its putative incompatibility with quantum mechanics) when she says that a probabilistic Turing machine can simulate any reasonable physical device at polynomial cost. Further examples of this thesis can be found in Copeland (1996). In order for the physical Church-Turing thesis to make sense we have to relate the space and time parameters of physics to their computational counterparts: memory capacity and number of computation steps, respectively. There are various ways to do this, leading to different formulations of the thesis (Pitowsky 1990). For example, one can encode the set of instructions of a universal Turing machine and the state of its infinite tape in the binary expansion of the position coordinates of a single particle. Consequently, one can physically 'realize' a universal Turing machine as a billiard ball moving among hyperbolic mirrors (Moore 1990, Pitowsky 1996). For the most intuitive connection between abstract Turing machines and physical devices, see the pioneering work of Gandy (1980), later simplified by Sieg and Byrnes (1999). It should be stressed that there is no straightforward logical relation between the original Church-Turing thesis and its physical version (Pitowsky and Shagrir 2003): the former concerns the concept of computation relevant to logic (since it is strongly tied to the notion of proof, which requires validation), but it does not analytically entail that all computations be subject to validation. Indeed, there is a long historical tradition of analog computation, which uses continuous physical processes (Dewdney 1984), and the output of such computations is validated either by repeated "runs" or by validating the physical theory that presumably governs the behavior of the analog computer.
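
The flavor of such an encoding can be conveyed with a small sketch: a finite stretch of tape is packed into, and recovered from, the binary expansion of a single coordinate in [0, 1). The full constructions of Moore and Pitowsky also encode the machine's state and head position; the helper names here are ours:

    # Packing a binary tape into the binary expansion of one real number.

    def tape_to_position(tape_bits):
        """'1011' -> 0.1011 in binary, i.e., 11/16."""
        return sum(int(b) / 2 ** (i + 1) for i, b in enumerate(tape_bits))

    def position_to_tape(x, n_bits):
        bits = []
        for _ in range(n_bits):    # read off bits by repeated doubling
            x *= 2
            bits.append("1" if x >= 1 else "0")
            x -= int(x)
        return "".join(bits)

    x = tape_to_position("1011")
    print(x)                       # 0.6875
    print(position_to_tape(x, 4))  # "1011"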

1.2 Physical "Short-cuts" of Computation

Do physical processes exist which contradict the physical Church-Turing thesis? Apart from analog computation, there exist at least two counter-examples to this thesis that purport to show that the notion of recursion, or Turing-computability, is not a natural physical property (Pour-El and Richards 1981, Pitowsky 1990, Hogarth 1994). Although the physical systems involved (a specific initial condition for the wave equation in three dimensions and an exotic solution to Einstein's field equations, respectively) are somewhat contrived, recent years have seen the emergence of a thriving school of "hypercomputation" that aspires to extend these limited examples of physical "hypercomputers" and in so doing to physically "compute" the non-Turing-computable (for a review see Copeland 2002; for a criticism, Davis 2003). Quantum hypercomputation is rarely discussed in the literature (see, e.g., Calude et al. 2003), but the most concrete attempt to harness quantum theory to compute the non-computable is the suggestion to use the quantum adiabatic algorithm (see below) to solve Hilbert's Tenth Problem (Kieu 2002, 2004), a Turing-undecidable problem equivalent to the halting problem. Recent criticism, however, has exposed the unphysical character of the alleged quantum adiabatic hypercomputer (see Smith 2005, Hodges 2005, and Hagar and Korolev 2006).

Setting aside the hype around "hypercomputers", even if we restrict ourselves to Turing-computable functions and focus on computational complexity, we can still find many physical models that purport to display "short-cuts" in computational resources. Consider, e.g., the DNA model of computation that was claimed (Adleman 1994, Lipton 1995) to solve NP-complete problems in polynomial time. A closer inspection shows that the cost of the computation in this model is still exponential, since the number of molecules in the physical system grows exponentially with the size of the problem (see the sketch below). Or take an allegedly instantaneous solution to another NP-complete problem using a construction of rods and balls (Vergis et al. 1986), which unfortunately ignores the accumulating time-delays in the rigid rods that result in an exponential overall slowdown. Another example is the physical simulation of the factorization of numbers into primes that uses only polynomial resources in time and space, but requires exponentially increasing precision. It thus appears that none of these models can serve as a counter-example to the physical Church-Turing thesis (as far as complexity is concerned), since they all require some exponential physical resource. Note, however, that all these models are based on classical physics, hence the unavoidable question: can the shift to quantum physics allow us to find "short-cuts" in computational resources? The quest for the quantum computer began with the possibility of giving a positive answer to this question.
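
Returning to the DNA example above, a back-of-the-envelope calculation shows the scale of the hidden resource cost; the figures below are rough illustrations:

    # Why "polynomial time" in the DNA model is misleading: the number of
    # candidate solutions (hence molecules) grows exponentially with n.

    AVOGADRO = 6.022e23  # molecules per mole

    for n in (20, 40, 60, 80):
        candidates = 2 ** n  # e.g., one DNA strand per candidate path
        print(f"n = {n}: {candidates:.2e} strands "
              f"~ {candidates / AVOGADRO:.2e} moles of DNA")

    # Already around n = 80 the material required exceeds a mole of DNA:
    # the exponential time cost has merely been traded for exponential space.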

1.3 Milestones

The idea of a computational device based on quantum mechanics was explored already in the 1970s by physicists and computer scientists. As early as 1969 Stephen Wiesner suggested quantum information processing as a possible way to better accomplish cryptographic tasks. But the first four published papers on quantum information (Wiesner published his only in 1983) belong to Alexander Holevo (1973), R. P. Poplavskii (1975), Roman Ingarden (1976) and Yuri Manin (1980). Better known are contributions made in the early 1980s by Charles H. Bennett of the IBM Thomas J. Watson Research Center, Paul A. Benioff of Argonne National Laboratory in Illinois, David Deutsch of the University of Oxford, and the late Richard P. Feynman of the California Institute of Technology. The idea emerged when scientists were investigating the fundamental physical limits of computation. If technology continued to abide by "Moore's Law" (the observation made in 1965 by Gordon Moore, co-founder of Intel, that the number of transistors per square inch on integrated circuits had doubled roughly every year since the integrated circuit was invented, a rate later revised to about every two years), then the continually shrinking size of circuitry packed onto silicon chips would eventually reach a point where individual elements would be no larger than a few atoms. But since the physical laws that govern the behavior and properties of the putative circuit at the atomic scale are inherently quantum mechanical in nature, not classical, the natural question arose whether a new kind of computer could be devised based on the principles of quantum physics.

Feynman was among the first to attempt to answer this question, producing in 1982 an abstract model that showed how a quantum system could be used to do computations. He also explained how such a machine would be able to act as a simulator for quantum physics, and conjectured that any classical computer harnessed for this task could do so only inefficiently, incurring an exponential slowdown in computation time. In 1985 David Deutsch proposed the first universal quantum Turing machine and paved the way to the quantum circuit model. The young and thriving domain also attracted philosophers' attention. In 1983 David Albert showed how a quantum mechanical automaton behaves remarkably differently from a classical automaton, and in 1990 Itamar Pitowsky raised the question of whether the superposition principle would allow quantum computers to solve NP-complete problems. He also stressed that although one could in principle 'squeeze' information of exponential complexity into polynomially many quantum states, the real problem lay in the efficient retrieval of this information.

Progress in quantum algorithms began in the 1990s, with the discovery of the Deutsch-Jozsa oracle (1992) and of Simon's oracle (1994). The latter supplied the basis for Shor's algorithm for factoring. Published in 1994, this algorithm marked a 'phase transition' in the development of quantum computing and sparked tremendous interest even outside the physics community. Soon afterwards, Cirac and Zoller (1995) proposed the first experimental realization of the quantum CNOT gate with trapped ions. In 1995, Peter Shor and Andrew Steane independently proposed the first schemes for quantum error-correction, and in that same year the first realization of a quantum logic gate was achieved in Boulder, Colorado, following Cirac and Zoller's proposal.

In 1996, Lov Grover of Bell Labs invented the quantum search algorithm, which yields a quadratic "speed-up" compared to its classical counterpart. A year later the first model for quantum computation based on nuclear magnetic resonance (NMR) techniques was proposed. This technique was realized in 1998 with a 2-qubit register, and was scaled up to 7 qubits at Los Alamos National Laboratory in 2000.

Starting from 2000 the field has seen exponential growth. New paradigms of quantum algorithms have appeared, such as adiabatic algorithms, measurement-based algorithms, and topological-quantum-field-theory-based algorithms, as well as new physical models for realizing a large-scale quantum computer with cold ion traps, quantum optics (using photons and optical cavities), condensed-matter systems and solid-state physics (meanwhile, the first NMR model has turned out to be a dead end with respect to scaling; see DiVincenzo 2000). The basic questions, however, remain open even today: (1) theoretically, can quantum algorithms efficiently solve classically intractable problems? (2) operationally, can we actually realize a large-scale quantum computer to run these algorithms?