Quantum Fault-Tolerance and Error Correction
Quantum computing is expected to offer an exponential speedup for certain problems, but only if quantum gates operate reliably enough. The notion of "effective" universal quantum computation is therefore closely tied to error correction: defeating the effects of decoherence is an essential requirement of quantum computation. The known effective quantum error-correcting codes involve specific one- and two-qubit operations, and they can work in a planar geometry in which only short-range couplings between qubits in two dimensions are needed.
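As a toy illustration of the error-correction idea (a classical analogue not taken from the text above), the three-qubit repetition code protects one bit against a single bit-flip: parity checks between neighbouring qubits locate the error without reading the data itself. A minimal sketch, with all function names our own:

```python
# Classical sketch of the 3-qubit repetition code against bit-flip errors:
# the parity checks play the role of the commuting stabilizers Z0Z1 and Z1Z2.

def encode(bit):
    # copy the logical bit onto three physical bits
    return [bit, bit, bit]

def syndrome(q):
    # parity checks between neighbours, here as classical XORs
    return (q[0] ^ q[1], q[1] ^ q[2])

def correct(q):
    # each single-bit error produces a unique syndrome pattern
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(q))
    if flip is not None:
        q[flip] ^= 1
    return q

def decode(q):
    # majority vote recovers the logical bit
    return max(q, key=q.count)

codeword = encode(1)
codeword[2] ^= 1  # a single bit-flip error on the third qubit
assert decode(correct(codeword)) == 1
```

The quantum version replaces the XORs with measurements of the commuting operators Z0Z1 and Z1Z2, which reveal the error location without collapsing the encoded superposition.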
One strand of research on quantum error correction pertains to the notion of 'self-correcting' quantum memories, which are stabilized not actively by quantum measurements but passively by macroscopic energy barriers that prevent error excitations from accumulating. Self-correction is also linked to the idea of local error correction by local dissipative engineering. Self-correction has been shown to be possible for quantum systems in four spatial dimensions, and there is strong evidence that it is impossible in two dimensions. The situation in three dimensions, in particular for so-called topological subsystem codes, is not yet well understood.
Quantum codes defined on low-dimensional, spatially regular lattices, such as the surface code, exhibit an unfavorable trade-off between distance and rate, which leads to considerable qubit overhead for a quantum memory or computation. Quantum LDPC (low-density parity-check) stabilizer codes are families of codes with a constant rate, parity checks acting on O(1) qubits, and a distance growing with the number of qubits. Open questions for these codes include their concrete noise thresholds, how to decode them in practice, and whether such code families can be embedded in a low-dimensional system with practically tolerable long-distance communication.
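A defining constraint for any CSS stabilizer code, LDPC or not, is that the X-type and Z-type parity checks must commute, which in the binary picture reads H_X H_Z^T = 0 (mod 2). A small sketch of this check, using the well-known Steane [[7,1,3]] code (our choice of example, not a code discussed above), whose X and Z checks both come from the classical [7,4] Hamming code:

```python
import numpy as np

# Parity-check matrix of the classical [7,4] Hamming code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# Steane's self-dual CSS construction uses H for both check types.
H_X, H_Z = H, H

# CSS condition: every X check commutes with every Z check.
commute = (H_X @ H_Z.T) % 2
assert not commute.any()

# Each check here acts on 4 qubits; for a quantum LDPC family this
# check weight stays O(1) as the block length grows.
print("checks commute:", not commute.any())
```

The LDPC property is precisely that the rows (and columns) of H_X and H_Z stay sparse as the code family scales, unlike, say, a random CSS code.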
Physical Implementation of Quantum Codes
The essence of quantum error correction is the measurement of sets of commuting operators. One way to measure these multi-qubit operators is to run a quantum circuit followed by single-qubit measurements. But one can also consider physical settings in which the required multi-qubit non-demolition measurements are implemented directly (direct parity measurement), or in which the outcomes of weak parity measurements are used continuously to keep the quantum data in the code space (continuous quantum error correction). We are interested in exploring such alternatives, in particular for the paradigm of circuit QED, where qubit read-out is performed by dispersive coupling to attenuated microwaves.
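The circuit-based route mentioned first can be sketched concretely: two CNOTs copy the parity Z0Z1 of two data qubits onto an ancilla, which is then measured. The following minimal statevector simulation (our own illustration, not the direct-parity scheme of the paper below) shows the ancilla outcome revealing the parity:

```python
import numpy as np

def apply_cnot(state, control, target, n):
    # act on an n-qubit statevector; qubit q sits at bit position n-1-q
    new = state.copy()
    for i in range(2 ** n):
        if (i >> (n - 1 - control)) & 1:
            new[i] = state[i ^ (1 << (n - 1 - target))]
    return new

n = 3  # qubits: data 0, data 1, ancilla 2
state = np.zeros(2 ** n)
state[0b010] = 1.0  # data in the odd-parity state |01>, ancilla in |0>

# copy the parity Z0 Z1 onto the ancilla
state = apply_cnot(state, 0, 2, n)
state = apply_cnot(state, 1, 2, n)

# probability that the ancilla measurement reads 1
p1 = sum(abs(state[i]) ** 2 for i in range(2 ** n) if i & 1)
print("P(ancilla=1) =", p1)  # 1.0 for odd-parity data
```

For an even-parity superposition such as (|00> + |11>)/sqrt(2) the ancilla deterministically reads 0 while the data stay entangled, which is what makes the measurement non-demolition on the code space.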
For the purpose of a quantum memory using superconducting qubits, we consider physically attractive modifications of the surface code architecture. An example is the [[4,2,2]]-concatenated surface code, in which each [[4,2,2]] unit corresponds to a 3D microwave cavity subjected to inter-cavity and intra-cavity parity checks. Another example is the use of 2D or 3D cavity modes to encode qubits directly, realizing approximate qubit-into-oscillator code states using Josephson-junction-induced nonlinearities as well as linear operations such as squeezing and homodyne detection.
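The [[4,2,2]] code mentioned above is stabilized by the two weight-4 operators XXXX and ZZZZ, leaving 4 - 2 = 2 logical qubits at distance 2. In the binary symplectic picture a Pauli on n qubits is a vector (x | z) over F_2, and two Paulis commute iff their symplectic product is 0 mod 2; a short sketch verifying this for the [[4,2,2]] stabilizers (helper names are our own):

```python
import numpy as np

def symplectic_product(p, q, n):
    # p, q are length-2n binary vectors (x | z); result 0 <=> Paulis commute
    x1, z1 = p[:n], p[n:]
    x2, z2 = q[:n], q[n:]
    return int(x1 @ z2 + z1 @ x2) % 2

n = 4
XXXX = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # x-part all ones
ZZZZ = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # z-part all ones

# X and Z anticommute on each of the 4 overlapping qubits,
# so the overlap is even and the stabilizers commute.
assert symplectic_product(XXXX, ZZZZ, n) == 0
print("XXXX and ZZZZ commute on 4 qubits")
```

The same even-overlap argument is what makes the concatenation with the surface code work: every surface-code check lifts to a product of these commuting cavity-level parities.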
- B. M. Terhal, "Quantum Error Correction for Quantum Memories" (2013)
- D. P. DiVincenzo and F. Solgun, "Multi-qubit parity measurement in circuit quantum electrodynamics", New Journal of Physics (2013)
- B. M. Terhal, "The Fragility of Quantum Information?", Lecture Notes in Computer Science, Vol. 7505, pp. 47-56 (2012)