Quantum computing holds the promise of solving problems that are currently beyond the capabilities of classical computers. However, despite remarkable progress in recent years, today’s quantum machines remain limited in what they can realistically achieve. The primary reason is a fundamental challenge: errors.
To understand why error correction is essential and what is meant by fault-tolerant quantum computing, it is necessary to look more closely at the nature of quantum information itself.
Why are qubits so fragile?
In classical computing, information is stored in bits that take the value 0 or 1, typically using transistors engineered for extreme stability. Quantum computing, by contrast, relies on qubits, which are inherently sensitive to noise. Qubits are implemented in elementary physical systems, such as the spin of an electron, the energy levels of an atom, or the oscillation modes of a superconducting circuit.
These systems are never perfectly isolated. An electron, for instance, inevitably interacts with its environment: nearby particles, surrounding materials, electromagnetic fields, or even cosmic radiation. In some cases, a single high-energy particle, such as a gamma ray, can disturb a qubit’s state and corrupt the information it carries.
This unavoidable interaction with the environment is what makes qubits intrinsically fragile and highly susceptible to noise.
A high and difficult-to-control error rate
Every operation performed on a qubit carries a non-negligible probability of error. Today, when a quantum gate is applied (an operation that transforms a quantum system from one state to another), the typical error rate is on the order of 10⁻³ to 10⁻⁴. If each gate fails independently with probability p, the probability that at least one error occurs after N gates is 1 − (1 − p)^N. With p = 10⁻³, this already reaches about 63% after 1,000 gates and becomes a near certainty after 10,000.
This is a major obstacle, because key quantum algorithms, such as Shor's algorithm for factoring (the basis of attacks on widely used cryptography) or Grover's algorithm for unstructured search, require millions or even billions of sequential operations. For these algorithms to run reliably end to end, error rates would need to be reduced to around 10⁻⁹ to 10⁻¹².
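A quick back-of-the-envelope calculation makes these numbers concrete. The Python snippet below uses the independent-error model described above; the gate counts are illustrative and not tied to any specific algorithm or hardware.

```python
# Probability that at least one error occurs in a circuit of n_gates gates,
# assuming each gate fails independently with probability p.
def failure_probability(p: float, n_gates: int) -> float:
    return 1.0 - (1.0 - p) ** n_gates

# Today's hardware: p ~ 1e-3 and a few thousand gates.
print(failure_probability(1e-3, 1_000))   # ~0.63: an error is likely
print(failure_probability(1e-3, 10_000))  # ~0.99995: an error is near-certain

# A large algorithm with a billion gates needs far lower error rates.
print(failure_probability(1e-9, 1_000_000_000))   # ~0.63: still marginal
print(failure_probability(1e-12, 1_000_000_000))  # ~0.001: comfortably reliable
```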
Without an effective way to manage errors, quantum computing would therefore remain confined to small-scale experimental demonstrations.
The core idea behind quantum error correction
Quantum error correction is built on a simple principle: information should not be entrusted to a single qubit. Instead of encoding information in one physical qubit, it is distributed across multiple physical qubits to form what is known as a logical qubit.
Using quantum error-correcting codes, it becomes possible to detect errors affecting the physical qubits and correct them without ever directly measuring the underlying quantum information. Taken together, these qubits form a much more robust representation of information.
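The quantum machinery is subtle, but the underlying intuition of redundancy can be seen in a classical three-bit repetition code, sketched below in Python. This is a deliberately simplified classical analogue: a real quantum code, such as the three-qubit bit-flip code, extracts error information through parity (stabilizer) measurements rather than by reading the bits directly.

```python
import random

def encode(bit: int) -> list[int]:
    # Redundantly store one logical bit in three physical bits.
    return [bit, bit, bit]

def apply_noise(bits: list[int], p: float) -> list[int]:
    # Flip each physical bit independently with probability p.
    return [b ^ (random.random() < p) for b in bits]

def decode(bits: list[int]) -> int:
    # Majority vote: corrects any single bit-flip.
    return int(sum(bits) >= 2)

# The logical bit survives unless two or more physical bits flip,
# so the logical error rate drops from p to roughly 3 * p**2.
p = 0.01
trials = 100_000
errors = sum(decode(apply_noise(encode(0), p)) != 0 for _ in range(trials))
print(errors / trials)  # ~3e-4, versus p = 1e-2 for a bare bit
```

The majority vote suppresses the error rate from p to roughly 3p²; quantum codes achieve a similar suppression while keeping the encoded quantum state unmeasured.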
This idea is not unique to quantum computing. Classical computing has long relied on similar strategies, particularly in memory systems, communications, and safety-critical applications. The key difference lies in the scale of the problem: classical components are relatively stable, whereas qubits are inherently prone to error.
It is therefore crucial to distinguish between two concepts:
- Physical qubit: a real, hardware-implemented qubit that is directly exposed to noise.
- Logical qubit: an abstract qubit constructed from multiple physical qubits and protected by error correction.
The more physical qubits are used to build a logical qubit, the more reliable it becomes. However, this improved reliability comes at the cost of significantly increased hardware requirements. The central challenge of fault-tolerant quantum computing is thus to strike a balance between robustness, resource overhead, and practical feasibility.
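To make this trade-off tangible, here is a minimal sketch based on a scaling heuristic commonly used for surface-code-style schemes, in which the logical error rate falls exponentially with the code distance d once the physical error rate p is below a threshold p_th. The constants A and p_th, and the qubit-count formula, are illustrative assumptions rather than measured values.

```python
# Heuristic logical error rate for a distance-d surface code:
# p_logical ~ A * (p / p_th) ** ((d + 1) // 2).
# A = 0.1 and p_th = 1e-2 are illustrative values, not hardware data.
A, P_TH = 0.1, 1e-2

def logical_error_rate(p: float, d: int) -> float:
    return A * (p / P_TH) ** ((d + 1) // 2)

def physical_qubits(d: int) -> int:
    # One surface-code logical qubit uses roughly 2 * d**2 physical qubits.
    return 2 * d * d

p = 1e-3  # physical error rate, 10x below threshold
for d in (3, 11, 25):
    print(d, physical_qubits(d), logical_error_rate(p, d))
```

Under these assumptions, driving the logical error rate from around 10⁻³ toward 10⁻¹² means growing each logical qubit from a few dozen to over a thousand physical qubits, which illustrates the resource overhead described above.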
Error correction as a prerequisite for scalability
A quantum system is said to be fault-tolerant when it can detect and correct its own errors while continuing to execute complex computations. Only under these conditions can quantum algorithms, as they exist in theory, eventually become practical tools for industry and scientific research.
Although the path forward remains challenging, the underlying scientific principles are now well understood. The key question is no longer whether error correction is necessary, but how it can be implemented efficiently at scale.
This article has outlined the foundations and core principles of fault-tolerant quantum computing. Once these basics are established, further questions naturally arise: what is the true state of the art today? Which technological barriers remain unresolved? And how are industrial players positioning themselves to address these challenges?
These issues are explored in the second article of this series, which takes a more forward-looking view of error-corrected quantum computing.