{\huge What is our Mundus Computationalis?}
Final summations of the Kalai-Harrow debate
\vspace{0.2 in}
William Unruh is a professor in the Theoretical Physics group of the University of British Columbia. He is known for the Unruh effect, which predicts that a uniformly accelerating observer will detect thermal radiation where an inertial (non-accelerating) observer detects none. This does not contradict relativity, but experimental tests have so far produced debatable results. He also wrote a paper soon after Peter Shor's famous algorithm appeared in 1994, showing how quantum noise would swamp a direct implementation of it. This was an early spur for quantum error correction, and Unruh is said to have withdrawn his skepticism after the quantum fault-tolerance theorem was established.
This week we conclude our debate between Gil Kalai and Aram Harrow on the scalability of quantum computers. The Latin in our title asks, what kind of computational world do we live in: one with quantum supremacy or classical control?
Unruh's Science Canada profile (see also this Q&A) includes this quotation in his own voice:
``William Blake said `If a fool would persist in his folly, he would become wise.' I'm grateful that society allows me and other scientists to persist.''
On behalf of Aram we note that Unruh has not persisted in his critical stance---at least the quantum computing notes on Unruh's group webpage are skepticism-free. On behalf of Gil, we note the value of persisting to make us wiser, besides the possibility of ultimately being right. We have persisted since January, and on behalf of Dick, I (Ken) thank both principals.
Our debate ends with a summation by Gil and then Aram. The first part of the summation turned into a renewed discussion of Conjecture 1. In this post Gil begins by responding to Aram's two other detailed defenses in our first round of the debate, posts II and III. He then reviews his original conjectures in light of these responses, our debate's second round in which one conjecture was partially refuted, and copious back-and-forth discussion in many comments to these posts. Then Aram gives his summation, updating the considerations that answered Unruh's questions long ago. We conclude with thanks and some ``what could be next?'' considerations.
Gil Kalai: An overall look at Aram's posts
I will continue to discuss the arguments that Aram made in his interesting posts. Aram raised important technical and conceptual issues, and some of the conceptual differences between our points of view will certainly stay unsettled for quite a while. However, with the exception of Conjecture C as we discussed earlier, all the technical arguments that Aram raised against my conjectures are either weak to start with or incorrect.
Aram's classical fault-tolerance test indeed poses a major challenge to quantum information theory, but I regard my explanation that is based on Conjecture 1 not only as adequate but also as advantageous compared to other explanations. Aram's recent study of Conjecture 1 is useful, but his formulation is incomplete and his interpretation of the conjecture is incorrect. We discussed these matters in the previous post.
The concern that my conjectures are in conflict with the linearity of quantum mechanics is wrong---as exemplified by (among other things) Aram's own shadow-qubit example. The technical claim from the second post, that pairwise positive correlation is insufficient to imply error-synchronization, is incorrect. Aram's thought experiments from the third post are interesting but do not support his claim. The specific claims that correlations as high as my conjectures predict are unphysical are also incorrect, and this general issue reflects an important difference over the appropriate modeling of noise. We will discuss some of these matters now.
Correlated Errors and Entanglement
A quick review
Central to my argument is the assertion that noise can be highly correlated and the assertion that errors will be highly correlated when we try to create highly entangled states. Before discussing Aram's objections to my conjectures I would like to emphasize two important points.
The first is that the ultimate obstacle to universal quantum computers is simply that the rate of noise will be too high. Correlation is an important part of the picture, but the first-order obstacle is the rate of noise. We need at least a thousand-fold improvement in the current rate even to approach the FT threshold. The effect of high correlation shows up as follows: while trace distance is invariant under unitary operations, for highly correlated errors the rate measured in qubit errors scales up linearly (in the number of qubits) compared to the rate measured in trace distance. It is the rate in terms of qubit errors that is relevant to error-correction.
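To make the scaling concrete, here is a minimal classical toy calculation (my own illustration with made-up parameters, not taken from Gil's papers). Both models below have the same expected number of bit errors per step, but the fully correlated model sits about $latex {n}&fg=000000$ times closer to the ideal evolution in total-variation distance, the classical stand-in for trace distance:

\begin{verbatim}
n, p = 10, 0.01

# Fully correlated model: with probability p, ALL n bits flip at once.
corr_distance = p                    # distance from the noiseless distribution
corr_bit_errors = n * p              # expected number of flipped bits per step

# Independent model: each bit flips independently with probability p.
indep_distance = 1 - (1 - p) ** n    # probability that at least one bit flips
indep_bit_errors = n * p

print(f"correlated:  distance = {corr_distance:.4f}, bit errors = {corr_bit_errors:.3f}")
print(f"independent: distance = {indep_distance:.4f}, bit errors = {indep_bit_errors:.3f}")
# Error correction cares about the per-(qu)bit error rate; the correlated model
# reaches the same rate with roughly n times less deviation from the ideal
# evolution, which is the asymmetry described above.
\end{verbatim}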
The second point is the distinction between special-purpose devices and general-purpose devices which was discussed in the previous post. I regard my conjectures as describing noisy special-purpose quantum devices, and this description will explain why general-purpose quantum devices cannot be built.
Many of the arguments and intuitions against my conjectural picture of noisy quantum devices are based on taking for granted a scenario of a universal quantum device. For example, Peter Shor commented:
``...the universe needs to know what code you are using in order to foil you. This attributes both more intelligence and more malice to the universe than I am willing to believe that it has.''
This makes sense if the code is created on a universal machine. But if every code (that can be created at all) requires a specific machine, then the universe needs neither malice nor intelligence to tell the codes apart.
The main objections to my conjectures
Conjectures 3 and 4 proposed an explicit relation between the intended state and the noise. Conjecture 3 asserts that two entangled qubits are subject to errors (information leaks) with positive correlation. Conjecture 4 asserts that qubits whose joint state is very entangled are subject to error-synchronization, namely there is a substantial probability of errors affecting a large fraction of all the qubits. In my papers I present a formal mathematical description of Conjectures 3 and 4 and point out that a certain strong form of Conjecture 3 implies Conjecture 4. One idea that was raised in the discussion (especially with commenter matt) is that error-synchronization follows from Conjecture 1, for every quantum error-correcting code that genuinely depends on $latex {n}&fg=000000$ qubits.
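To illustrate what error-synchronization means here, consider a toy mixture model (my own illustration with made-up parameters, not a model from the papers): most of the time each qubit errs independently at a small rate, but occasionally the whole device has a ``bad'' step in which every qubit is at risk. The sketch below estimates how often a large fraction of the qubits err simultaneously:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p0, q, trials = 50, 0.01, 0.05, 200_000   # hypothetical parameters

bad = rng.random(trials) < q                 # "bad" steps occur with probability q
rates = np.where(bad, 0.5, p0)               # per-qubit error rate in each step
errors = rng.binomial(n, rates)              # number of qubits hit in each step

print("mean errors per step:", errors.mean())
print("P(more than n/4 qubits err at once):", (errors > n / 4).mean())
# The mean error count stays modest (about 1.7 here), yet with probability
# roughly q a constant fraction of all the qubits errs simultaneously: a heavy
# tail of the kind Conjecture 4 calls error-synchronization.
\end{verbatim}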
The main objections to my conjectures were:
$latex {\bullet}&fg=000000$ 1) The conjectured properties of noise violate the linearity of quantum mechanics.
To the best of my judgement this is not true. Let me mention three points:
The class of Hamiltonian quantum evolutions (namely smoothed Lindblad evolutions) that describe the ``bad noise'' forms a subclass of all Hamiltonian evolutions of system and environment.
Aram described a quantum-theoretic scenario (shadow qubits) that may support the phenomenon I conjecture. The same applies to variations on John Preskill's models.
There are strong reasons to believe that the conjectures do apply in many cases, e.g., for small quantum computers, say those with at most 20 qubits. Why isn't this a violation of the linearity of quantum mechanics?
My conjectures can be regarded as describing nonlinear behavior of decoherence that is fully supported by quantum mechanics. The claim regarding QM's linearity also reflects a confusion between the behavior of special-purpose machines and that of general-purpose machines.
$latex {\bullet}&fg=000000$ 2) The conjectured properties of noise violate ``known physics.''
My response to this important point raises the central issue of choosing between two approaches for modeling:
The first approach is that in modeling open quantum systems we should take ``known physics'' as part of the noise model.
It turns out that if you impose the ``known physics'' on the noise, it leads to surprising ``unknown physics'' that we will be able to create by controlled quantum evolutions via quantum fault-tolerance. Joe Fitzsimons' 2-locality suggestions as well as John Preskill's model represent this approach.
The second approach is to let the ``known physics'' (both for the noise and for the signal) emerge from the model itself, for special-purpose machines that emulate quantum evolutions.
My modeling follows the second approach. I expect that my models of noise will confirm these ``known physics'' insights both for the noise and for the signal. I find the approach appealing since there is no reason to believe that when we have a natural quantum system the laws that describe the interaction of the system with the environment will be different from those that describe the interactions within the system.
Joe Fitzsimons's 2-locality argument was a specific attempt to explain why my conjectures violate known physics. It turned out, however, that a strict version of it is too strong to be realistic (see this comment by Preskill), while a weak form of 2-locality is too weak to be in conflict with my conjectures. But again, my main objection to Joe's attempt was different: insights of this kind should emerge from the model rather than being imposed on the model.
While I disagree with these two out-of-hand reasons to reject my conjectures, it is quite possible that my conjectures are simply false. They should be carefully examined on a case-by-case basis. Be that as it may, my conjectures are proposed also as a formal description for non-FT quantum evolutions even in a reality that allows or even demonstrates quantum fault-tolerance.
Towards a theory of non-FT quantum evolution
Switching the Quantum FT option OFF
Suppose that we have universal quantum computers, on every desk and every lap. One way to think about my conjectures is as follows. Let the quantum computers be equipped with a quantum-FT on/off switch, such as noise-cancelling earphones have. My conjectures describe the behavior of noise when the quantum-FT switch is off. Moreover, many of the attempts to build quantum computers or to create experimental quantum evolutions do not apply fault-tolerance schemes to start with, so quantum computers with the quantum-FT turned off should suffice to describe them.
A basic noise example
The most basic example of noise when the quantum FT is switched off is the following.
You have an algorithm described by a unitary operator $latex {U}&fg=000000$. Let $latex {E}&fg=000000$ be a standard simple noise operation. Consider the noise operation $latex {UEU^{-1}}&fg=000000$. In words, this means that the noise at the end of the evolution is like applying $latex {E}&fg=000000$ at the beginning and then running the unitary operator $latex {U}&fg=000000$ noiselessly. My conjectures say that some part of the overall noise behaves just like this example.
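For concreteness, here is a minimal one-qubit sketch of this picture (my own illustration: a bit-flip channel stands in for $latex {E}&fg=000000$ and a Hadamard gate for $latex {U}&fg=000000$). It checks numerically that applying the conjugated noise $latex {UEU^{-1}}&fg=000000$ after the ideal evolution is the same as applying $latex {E}&fg=000000$ first and then running $latex {U}&fg=000000$ noiselessly:

\begin{verbatim}
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)                 # bit-flip error
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # intended unitary U

def apply_channel(channel, rho):
    # channel is given as a list of (probability, unitary Kraus operator) pairs
    return sum(p * K @ rho @ K.conj().T for p, K in channel)

p = 0.1
E = [(1 - p, np.eye(2)), (p, X)]                  # simple noise E
E_conj = [(w, H @ K @ H.conj().T) for w, K in E]  # the noise U E U^{-1}

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)  # initial state |0><0|

noise_at_end = apply_channel(E_conj, H @ rho0 @ H.conj().T)   # run U, then U E U^{-1}
noise_at_start = H @ apply_channel(E, rho0) @ H.conj().T      # apply E, then run U
print(np.allclose(noise_at_end, noise_at_start))              # True
\end{verbatim}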
Smoothed Lindblad evolutions
Start with a general (time-dependent) Hamiltonian evolution defined between times $latex {t=0}&fg=000000$ and $latex {t=1}&fg=000000$ on the system and the environment. Let $latex {{\cal H}}&fg=000000$ denote the Hilbert space describing the system. The evolution can be described infinitesimally via the sum of two superoperators $latex {A_t}&fg=000000$ and $latex {B_t}&fg=000000$, where $latex {A_t}&fg=000000$ describes a unitary evolution on $latex {{\cal H}}&fg=000000$ and $latex {B_t}&fg=000000$ describes the ``noise.'' (I assume that $latex {B_t}&fg=000000$ is Lindbladian and corresponds to a POVM measurement, but we can consider more general or more restricted scenarios.) Let $latex {U_{s,t}}&fg=000000$ be the unitary operator describing the evolution between times $latex {s}&fg=000000$ and $latex {t}&fg=000000$. Note that unitary operators on $latex {{\cal H}}&fg=000000$ induce an action on superoperators.
Let $latex {K}&fg=000000$ be a positive kernel defined on $latex {[-1,1]}&fg=000000$. (In order to include standard noise we can allow an atom at 0.) The smoothing operation is obtained by replacing $latex {B_t}&fg=000000$ by a weighted average of the 1-parameter family of superoperators of the form $latex {U_{s,t}B_s U^{-1}_{s,t}}&fg=000000$ with weight $latex {K(s-t)}&fg=000000$. (Here $latex {s}&fg=000000$ varies over all times between 0 and 1; it seems important to take times in the future as well as times in the past.) The discrete-time model is especially simple. I propose smoothed-in-time noise of this kind as a formal description of noise ``without fault-tolerance.'' Namely, this type of approximation to pure evolutions expresses formally the idea that quantum fault-tolerance was not activated, and at the same time will not allow it.
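Here is a rough discrete-time sketch of the smoothing operation (my own toy implementation, not Gil's formal definition; the noise channels and the kernel are placeholders supplied by the caller). The noise at step $latex {t}&fg=000000$, given as a list of Kraus operators, is replaced by a kernel-weighted mixture of the noise at the other steps $latex {s}&fg=000000$, conjugated by the intended unitary evolution $latex {U_{s,t}}&fg=000000$ between those steps:

\begin{verbatim}
import numpy as np

def conjugate(kraus, V):
    """Push a noise channel (given by Kraus operators) along the unitary V."""
    return [V @ K @ V.conj().T for K in kraus]

def intended_evolution(unitaries, s, t):
    """U_{s,t}: the intended noiseless evolution from step s to step t."""
    lo, hi = sorted((s, t))
    V = np.eye(unitaries[0].shape[0], dtype=complex)
    for r in range(lo, hi):
        V = unitaries[r] @ V
    return V if s <= t else V.conj().T       # inverse direction for future times

def smoothed_noise(noise, unitaries, t, kernel):
    """Kernel-weighted mixture of U_{s,t} B_s U_{s,t}^{-1} over all steps s."""
    weights = [kernel(s - t) for s in range(len(unitaries))]
    total = sum(weights)
    kraus = []
    for s, w in enumerate(weights):
        V = intended_evolution(unitaries, s, t)
        kraus += [np.sqrt(w / total) * K for K in conjugate(noise[s], V)]
    return kraus    # Kraus operators of the smoothed noise acting at step t
\end{verbatim}

Here noise[s] is the list of Kraus operators of $latex {B_s}&fg=000000$, unitaries[r] is the intended step unitary from time $latex {r}&fg=000000$ to $latex {r+1}&fg=000000$, and kernel plays the role of $latex {K}&fg=000000$; note that $latex {s}&fg=000000$ ranges over future steps as well as past ones, as in the description above.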
\section *{Other topics in brief}
The debate touched on many technical and conceptual issues, and I tried to round up many of them in this comment, plus some follow-ups. Let me relate some here:
\subsubsection *{Hamiltonian models and concentration of the number of errors} In various stochastic/Hamiltonian models for noise we witness Gaussian concentration of the number of errors. I regard this feature as unrealistic, and believe such models abstract away a crucial aspect of noise that may or may not allow fault tolerance. See this comment and the previous post.
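For reference, the kind of concentration meant here is the standard Hoeffding behavior of independent errors: if each of $latex {n}&fg=000000$ qubits errs independently with probability $latex {p}&fg=000000$, then the number $latex {X}&fg=000000$ of errors satisfies $latex {\Pr[|X-pn| \geq tn] \leq 2e^{-2t^2n}}&fg=000000$, so fluctuations of the error count around its mean are exponentially unlikely. The correlated models behind Conjecture 4 do not share this property.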
\subsubsection *{Bose-Einstein condensation; superconductivity; teleportation}
I proposed to examine Conjecture 1 (adapted) on Bose-Einstein condensation, and discussed it mainly in this thread.
David Deutsch proposed (privately) superconductivity as a counterexample for my strong principle of noise. Peter Shor proposed Kitaev's teleportation-based quantum computing as a counterexample to my conjectures. (I have some reasons to think that smoothed Lindblad evolutions exclude this type of quantum computing.)
\subsubsection *{Perturbation methods; renormalization group} Understanding the feasible approximations for pure quantum states and evolutions is of interest not only in the context of controlled quantum evolutions. Non-FT evolutions are relevant for describing a quantum evolution on a small Hilbert space that neglects some of the degrees of freedom of a quantum system.
John Sidles, Aram, Hal Swyers, and other commentators mentioned some connection with perturbation theory, and indeed this is a direction that I find very interesting.
In a private discussion Aram mentioned the renormalization group as an example of how there can be an effective theory at some scale (i.e., low energy, or large distances) that results from ``integrating out'' dynamics at higher energies/shorter distances/higher frequencies, etc. Again, this can be related to questions about modeling noise that comes from neglecting the internal structure of our qubits (or qudits).
\subsubsection*{Censorship}
My Conjecture C from the first post was refuted by Aram in conjunction with Steve Flammia. We discussed this matter in this post, where we also discussed a related 1980 paper by Anthony Leggett. The post on censorship discusses bounded-depth quantum circuits as ``Sure/Shor separators,'' an idea that goes back to Unruh. Other alternatives for Conjecture C were offered by John Sidles in this comment based on tensor-rank, and also here, and by me in this comment (based on Pauli expansion).
If small-depth quantum computation is indeed the limit of approximately-pure evolutions, this suggests not only that quantum computation offers no computational superiority, but also that interesting computation in our world is classically controlled.
\subsubsection *{Can the computation hide the geometry and physics?} One insight of classical simulation that people tend to take for granted in the quantum case is that computation abstracts away the physics and geometry of the simulated device. I challenge this insight and our third-round post was devoted to this issue.
\subsubsection *{Anyons} There are fascinating suggestions for noise-robust states that (in principle) can be created experimentally; see this review paper.
The high-level concern is that since the proposed experimental process does not involve FT, the noisy states will satisfy Conjecture 1 and will not exhibit the expected robustness to noise. However, the description of how the stability of certain states increases, e.g., when two anyons are taken apart, is quite convincing, and it will be worth the effort to try to give a specific physical mechanism that supports my conjectures in this case.
\subsubsection *{Simulating quantum physics on classical computers} The connections with classical simulations of quantum physics were mentioned quite a few times. A basic question is, what can be said about computations in physics that clearly require an exponential number of steps on a digital computer? Aren't such computations (which nature apparently carries out routinely) clear evidence for ``quantum supremacy?'' My guess is that in cases where the computation is intractable, the answer is irrelevant.
\subsubsection *{The rate of noise} Neither the ordinary models of noise nor my conjectures for non-FT noise give a lower bound on the rate of noise. One proposal in my paper, drawing on work by Sergio Boixo, Lorenza Viola, and Gerardo Ortiz, is that the rate of noise in a time interval is bounded below by a measure of noncommutativity of the unitary evolutions $latex {U_{s,t}}&fg=000000$ in this interval.
\subsubsection*{Non-flat substitutes for Hilbert spaces} John Sidles offered, over many comments, a certain geometric theory of quantum evolutions on non-flat spaces, and connected it with various aspects of our debate. This is certainly very interesting.
\subsubsection *{Non-FT classical evolutions} It can be of interest (and is perhaps harder than in the quantum case) to try to describe classical evolutions that do not enable (or hide) fault tolerance and (long) computation.
Aram Harrow: Rebuttal and Summation
Quantum mechanics has been a counter-intuitive theory from its birth, and the theory of quantum information in a sense aims to distill the strangest aspects of quantum mechanics. So it is not surprising that both have faced a chorus of skepticism from the start, including echoes of today's debate. The problem with the skeptics' task---from the original EPR paper to Gil's contemporary objections---is that their arguments entail an alteration of this supremely successful theory. Not only does the evidence unequivocally support quantum mechanics, there is not even a candidate theory that could replace it, waiting in the wings for experimental confirmation.
Gil's conjectures, then, are really pre-conjectures---not inferred consequences of existing theory but rather linchpins of an alternative one. He conjectures that a consistent theory can be found that will encompass both the impossibility of quantum computers and the last century of experiment supporting quantum mechanics. This is a worthy goal and a fascinating intellectual challenge. But it is at least very hard, and is probably impossible, as evidenced in part by the difficulty that Gil faces in making precise statements about his conjectures.
More concretely, he can make precise statements about his conclusions, such as ``real-world quantum computers experience noise that makes them classically simulable,'' but not about any microscopic claim about physics, even in any concrete model, that could plausibly lead to these conclusions. Similarly, his conjectures have difficulty avoiding reference to the ``intended'' operation of a quantum device, although any physical principle we discover is unlikely to depend on our intentions.
Of course, Gil does suggest a number of physical mechanisms that are candidates for de-railing quantum computing, or at least refuting the assumptions of the FT threshold theorem. He discusses self-synchronizing metronomes, smoothed Lindblad evolutions, constant-size fluctuations in temperature, special-purpose quantum computers introducing correlations in errors, and other ideas.
Why I believe quantum computing should be possible
I have tried to explain why I don't believe Gil's conjectures. But it is further important to ask why I, or anyone else, should believe that quantum computing is a plausible technological possibility.
Part of it, as I've said, is believing in quantum mechanics itself, with its exponentially large Hilbert spaces, its entanglement, its interference, and all its other counter-intuitive features. Quantum mechanics did not succeed by being a more intellectually appealing theory than its competitors. On the contrary, the route from Schrödinger's equation to EPR to Bell's theorem to secure quantum communication took over 50 years, even though today it can be covered in a few hours of lectures. This delay is basically because many of the principals (most notably Albert Einstein) didn't want quantum mechanics to be true. Instead, many preferred to think about interpretations such as Bohmian mechanics that hold close to older intuitions, but are dogs' breakfasts to work with.
Quantum mechanics succeeded by simply giving the right answer, over and over again, while no other theory could come close. And now, with the modern quantum-information perspective, many of the conceptual difficulties of the theory are being tamed. This textbook is a great example of the new pedagogical approach.
But couldn't we have quantum mechanics without quantum computing? Isn't that what this debate is all about? After all, as John Sidles has mentioned several times, many perturbation expansions are hard to compute, or not even convergent, and these can give rise to complicated many-body terms even when the original Hamiltonian is simple. Nothing about Schrödinger's equation guarantees that we'll ever see anything resembling textbook quantum mechanics in the real world.
Let me give a simple example of how things could have been much worse for us. Electrons are thought to be pointlike particles, but suppose that one day we discover they are, like protons, composites of much smaller particles. These particles could have their own dynamics that could ruin a two-slit experiment by effectively decohering the ``which path'' information. However, this turns out not to happen, and we do observe electron interference. Moreover, such alternative theories would, via the Pauli exclusion principle, alter the periodic table as we know it, so we can rule them out without any fancy experiments.
In practice, we see many examples of quantum coherence even in current technology. We have multiple-quantum coherence in NMR, Bose-Einstein condensates, superconductivity (with protection by topological effects), lasers, and simply the fact that transistors work the way we expect. Higher-order perturbation theory, or some new physics, could have derailed any of these technologies, but did not---a fact explained in part by renormalization.
Thanks
Gil: This debate gave me a great opportunity, for which I am thankful, to discuss my work on quantum computers. First and foremost, let me thank Aram Harrow for initiating a public discussion of my paper and for being my partner in this debate. Aram did a great job in presenting the case for why quantum computers are feasible. He presented serious and interesting objections to my conjectures, and made many enlightening comments on both technical and conceptual levels. I also thank the many commentators, and allow me to mention John Sidles for the breadth and spirit of his involvement. There were many excellent comments and questions by many people; I tried to respond to the questions addressed to me, and I enjoyed reading some of the further discussion and interesting ideas not related to my work.
Of course, I am thankful to Ken Regan and Dick Lipton for hosting and editing the debate, for summarizing the earlier threads, for adding exciting introductions, for illuminating private discussions with Aram and me on the matter, and for very interesting related and unrelated blog posts during this time.
Aram: I also want to thank Gil for his discussions both here, and in many private emails. It's been useful for me to think through these things, and has caused me to think about skepticism in a new way. (For example, am I like one of those establishment scientists who rejected continental drift out of hand? How certain can I be about my position on other controversial things, like the probability of global warming, or relativity, or my own atheism?) It has led to my work with Steve Flammia on Conjecture C (arxiv:1204.3404), and a new project on adversarial noise (joint with Matt Hastings and Anup Rao) that I'll talk about soon on the Pontiff blog.
Also, I wish to thank the many commenters, who have gone far beyond the original posts into many interesting realms. I hope that my participation here will turn out well for the field of quantum information, which has faced a large amount of skepticism and misunderstanding from the field of computer science. Many arguments in favor of quantum computing inevitably stem from physics; however, I hope that this will help encourage dialogue and mutual understanding between physics and computer science more generally. Finally, I am grateful to Ken and Dick for hosting, editing and framing this debate, and for the contributions of their blog more generally.
Conclusions
(Gil:) The construction of universal quantum computers is a serious scientific possibility and an excellent working assumption. Building universal quantum computers will be a once-in-a-lifetime scientific and technological achievement. Operating quantum computers will lead to further terrific scientific and technological fruits, for decades, just as the discovery of X-rays did. Quantum error-correction is an important concept for building fault-tolerant quantum computers and is a fundamental scientific concept in its own right. Building quantum computers is an endeavor which should be vigorously pursued, and indeed it is being pursued theoretically and experimentally by top researchers who have already established impressive achievements. My university, The Hebrew University of Jerusalem, just established a new Center for Quantum Information.
Yet, it may well be the case that universal quantum computers are not possible, and that ``no quantum fault-tolerance'' should be taken as the guiding principle in modeling quantum decoherence. Developing a theory of non-fault-tolerant quantum evolutions, and studying how quantum computers can fail, is an interesting and important endeavor. Non-FT quantum (and classical) evolutions may shed new light on existing models, offer new relevant models for quantum evolutions, and lead to substantial new insights on related physical questions, models, theories, and computational methods.
Finally, it can be useful to draw two extreme scenarios as possible conclusions of this debate. One scenario, coined ``quantum supremacy'' by Preskill, asserts that the quantum world offers and enables superior forms of computational complexity. The other scenario, which we can call ``classical control,'' asserts that computation in our quantum world can only be based (via a repetition code or a similar mechanism) on classical information, and that realistic nearly-pure quantum evolutions are (approximately) of bounded depth.
(Aram:) The quest to build quantum computers has been fraught with difficulties, but to my knowledge none of these are rooted in error correlations increasing with system sizes. Rather, they involve the sort of problems that elephants would have in dissecting fleas. Sometimes we don't know how to decouple noise sources, sometimes we have a hard time addressing qubits individually instead of collectively, sometimes our fabrication techniques are unreliable, and sometimes we can initialize or coherently manipulate or measure, but not all three. All of these are important challenges, and some could lead us to abandon a technology (e.g. if we need an optical table the size of a football field), but none suggests a fundamental showstopper.
At a certain point, we gain confidence that there is no monster hiding under the bed, not only because there hasn't been one before, but because nothing about our knowledge of the world would suggest such a monster.
Open Problems
Choose a platform that is currently being considered for an experimental implementation of quantum computing (e.g. ion traps, linear optics, superconducting qubits, etc.) and come up with a concrete noise model (i.e. derived from Schrödinger's equation) that (a) is consistent with all existing experiments, and (b) provably rules out scalable quantum computing.
Does Conjecture 1 suffice to reduce the computational power of noisy quantum computers to BPP? Show that basing decoherence on smoothed Lindblad evolutions supports Conjectures 1-4 and leaves us with BPP.
Can smoothed Lindblad evolutions be used in a non-trivial way to show how classical memory and computation emerge under noise?
What is the correct picture of our mundus computationalis (Latin for ``computational universe''): quantum supremacy, classical control, or something in the middle?